Flip Tanedo | USLHC | USA

Thoughts on how to pick a graduate school

Sunday, February 28th, 2010

It’s that time of year again: hard-working college seniors all over the world are getting e-mails from American universities offering them positions as PhD students in, among many other fields, physics. [Other countries have slightly different time-scales and procedures for PhD applications.] To all of you who have gotten these letters: congratulations!

Image from PhD Comics, (c) Jorge Cham.

Now comes the hard part: you have to commit to a PhD program which will frame your education and research for the next 4 to 6(-ish) years. If you’ve gotten this far, then you have already mastered the ‘rules of the game’ for your undergrad years: work hard, do well in courses, and start doing some research. Here’s the hitch:

Picking a grad school is the first of many decisions before you where there is no clear path and no obvious set of rules.

Welcome to grad school!

Since this can be a bit of an overwhelming decision, I’d like to offer my thoughts on this matter with the caveat that they are based on my own personal experience in theoretical particle physics and may not apply to everyone. (I’ll do my best to be as general and objective as possible.) Most of my thoughts on this matter are collected in some detail in an old post on an old blog, but I’d like to provide an updated and shorter presentation here.

How not to pick grad schools

The first thing you should know: grad school is not one-size-fits-all. There’s no clear hierarchy of programs. Your mother might want you to go to a big-name Ivy League university, but that is irrelevant unless that university has a strong program in your field. You have no obligation to go to a program just because the university is ‘more prestigious.’ You are judging particular programs (maybe even particular advisers) and what matters most is finding a place where you can do good research and set yourself up for the next stage of your career. So unless your mother is a professor in your field, do not listen to what she says. (Unless it is ‘I love you’ and ‘I’m proud of you.’)

Similarly, let’s settle this right now: it does not matter what the climate is like or how big the city around the university is. Your job is to find a place where you can do exceptional science, and if that means that for a few years you have to live outside your comfort zone, then so be it. (Besides, I’m not convinced that college seniors even know what their ‘comfort zone’ is. You might be surprised.)

Gather the right information

Rule number two: visit each school and talk to as many people as you can. (They won’t mind too much if you skip all of the tours to talk to people in your field.) Most importantly, speak directly to any potential advisers. There are a few important questions that you should always ask faculty and their current grad students:

  1. How are students paired with faculty? What is the likelihood that you will be able to work with the faculty that you want?
  2. How often does each professor talk to his/her graduate students? Do the grad students play central roles in the group, or do they follow their faculty?
  3. What kind of funding does the group offer? How much will you have to teach, and how many semesters will the group support you without teaching? (This is especially important in theoretical physics.)
  4. How have the professor’s past students done? Have they found good postdocs and gone on to faculty jobs?
  5. What are they working on? Note: you should already have a good idea about this based on databases like SPIRES (for particle physicists).

Question #1 is especially important in theoretical particle physics where groups tend to be smaller. Having verbal assurance that an adviser will take you goes a very long way. I’ve seen too many students choose a grad program where they thought they could work with Prof. Y but then ended up having to find a back-up plan because that professor didn’t take any students that year.

Find the right fit: it’s all about you

Rule number three: figure out what kind of students benefit the most from each program, and decide if you match the profile. Some schools do an excellent job with preparatory coursework, but this would be very frustrating for students who already have a strong course background. On the other side of the spectrum, some schools expect students to be very independent from the very beginning, which may frustrate students who could use more mentoring early on.

Here’s what’s difficult: suppose you are choosing between two universities, X and Y, which have strong departments in your field. You think that X would provide the support you need, but Y is more prestigious and tends to do well placing its graduate students. You worry that going to X will reduce your chances of getting a good postdoc.

It won’t. Trust me. I’ve seen too many good students who have become frustrated at top-name schools because the program wasn’t the right fit for them, and I’ve seen just as many excellent students who have done exceptionally well after going to a lesser-known school with a program that was just right for them.

Evaluating advisers

How do you know which adviser is right for you? This is also a very personal choice.

  • Do you need someone with a more hands-on approach, or someone who can ‘point you in the right direction’ and let you explore? [If you haven’t done research before, then you probably want someone hands-on.]
  • Is the professor working on something you are interested in? (You should already have a good idea of what you are interested in!)
  • How have their past students done? How are they as an adviser? A Nobel laureate might be great for a letter of recommendation, but that doesn’t help if s/he isn’t there to help you develop into a good scientist as well.

You might want to think about how active an adviser is (this is correlated with age), whether there are external factors (faculty with young children have less time), and what kind of relationship you want with your adviser (research only, or chummy buddies). If you’re not sure how to evaluate potential advisers as scientists, the best people to ask are the faculty at your current university.

Let me emphasize once again: a personal assurance that you can work with a particular faculty member goes a long way. You do not want to end up at a university where none of the faculty have room for another student in your field.

More advice is good advice

Anyway, hopefully these paragraphs can help get the ball rolling. Probably the best advice I can give is to solicit advice from as many relevant sources as possible (especially faculty at your university) and figure out which is most relevant for you.

-Flip for US/LHC blogs.


Let’s draw Feynman diagrams!

Sunday, February 14th, 2010

Greetings! This post turned into a multi-part ongoing series about the Feynman rules for the Standard Model and a few of its extensions. I’ll use this first post as an index for all of the parts of the series.

  1. Let’s draw Feynman diagrams! (this post)
  2. More Feynman diagrams.
  3. Introducing the muon.
  4. The Z boson and resonances.
  5. Neutrinos.
  6. The W boson, mixing things up.
  7. Meet the quarks.
  8. World of glue.
  9. QCD and confinement.
  10. Known knowns of the Standard Model. (summary)
  11. When Feynman Diagrams Fail.
  12. An idiosyncratic introduction to the Higgs.
  13. A diagrammatic hint of masses from the Higgs
  14. Higgs and the vacuum: Viva la “vev”
  15. Helicity, Chirality, Mass, and the Higgs
  16. The Birds and the Bs
  17. The spin of gauge bosons
  18. Who ate the Higgs?
  19. Unitarization of vector boson scattering
  20. Private lives of Standard Model particles (summary)

There are few things more iconic of particle physics than Feynman diagrams. These little figures of squiggly lines show up prominently on particle physicists’ chalkboards alongside scribbled equations. Here’s a ‘typical’ example from a previous post.

The simplicity of these diagrams has a certain aesthetic appeal, though as one might imagine there are many layers of meaning behind them. The good news is that it’s really easy to understand the first few layers, and today you will learn how to draw your own Feynman diagrams and interpret their physical meaning.

You do not need to know any fancy-schmancy math or physics to do this!

That’s right. I know a lot of people are intimidated by physics: don’t be! Today there will be no equations, just non-threatening squiggly lines. Even school children can learn how to draw Feynman diagrams (and, I hope, some cool science). Particle physics: fun for the whole family. 🙂

For now, think of this as a game. You’ll need a piece of paper and a pen/pencil. The rules are as follows (read these carefully):

  1. You can draw two kinds of lines, a straight line with an arrow or a wiggly line:

    You can draw these pointing in any direction.
  2. You may only connect these lines if you have two lines with arrows meeting a single wiggly line.

    Note that the orientation of the arrows is important! You must have exactly one arrow going into the vertex and exactly one arrow coming out.
  3. Your diagram should only contain connected pieces. That is, every line must connect to at least one vertex. There shouldn’t be any disconnected part of the diagram.

    In the image above the diagram on the left is allowed while the one on the right is not since the top and bottom parts don’t connect.
  4. What’s really important are the endpoints of each line, so we can get rid of excess curves. You should treat each line as a shoelace and pull each line taut to make them nice and neat. They should be as straight as possible. (But the wiggly line stays wiggly!)

That’s it! Those are the rules of the game. Any diagram you can draw that passes these rules is a valid Feynman diagram. We will call this game QED. Take some time now to draw a few diagrams. Beware of a few common pitfalls of diagrams that do not work (can you see why?):
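For readers who like to tinker, the four rules above can even be turned into a small program that decides whether a diagram is legal. The sketch below is entirely my own (the encoding of lines as tuples is an assumption, not anything from the post): each line is `(kind, a, b)` with `kind` either `"fermion"` (arrow from `a` to `b`) or `"photon"` (wiggly, undirected), and any node touched by more than one line counts as a vertex.

```python
from collections import defaultdict

def is_valid_qed_diagram(lines):
    """Check the rules of the 'QED game' for a list of (kind, a, b) lines."""
    touches = defaultdict(list)              # node -> lines attached to it
    for line in lines:
        kind, a, b = line
        touches[a].append(line)
        touches[b].append(line)

    vertices = {n for n, att in touches.items() if len(att) > 1}

    # Rule 2: every vertex joins exactly one arrow in, one arrow out,
    # and one wiggly (photon) line.
    for v in vertices:
        att = touches[v]
        arrows_in  = sum(1 for k, a, b in att if k == "fermion" and b == v)
        arrows_out = sum(1 for k, a, b in att if k == "fermion" and a == v)
        photons    = sum(1 for k, a, b in att if k == "photon")
        if len(att) != 3 or (arrows_in, arrows_out, photons) != (1, 1, 1):
            return False

    # Rule 3: every line connects to at least one vertex, and the whole
    # diagram is one connected piece.
    if any(a not in vertices and b not in vertices for _, a, b in lines):
        return False
    nodes = list(touches)
    if not nodes:
        return False
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for _, a, b in touches[stack.pop()]:
            for n in (a, b):
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
    return len(seen) == len(nodes)

# e.g. two arrowed lines exchanging one wiggly line:
tree = [("fermion", "x1", "v1"), ("fermion", "v1", "x2"),
        ("fermion", "x3", "v2"), ("fermion", "v2", "x4"),
        ("photon",  "v1", "v2")]
print(is_valid_qed_diagram(tree))   # True: a legal diagram
```

Rule 4 (pulling the lines taut) doesn’t need code: it just says two encodings that connect the same endpoints are the same diagram.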

After a while, you might notice a few patterns emerging. For example, you could count the number of external lines (one free end) versus the number of internal lines (both ends attached to a vertex).

  • How are the number of external lines related to the number of internal lines and vertices?
  • If I tell you the number of external lines with arrows pointing inward, can you tell me the number of external lines with arrows pointing outward? Does a similar relation hold for the number of external wiggly lines?
  • If you keep following the arrowed lines, is it possible to end on some internal vertex?
  • Did you consider diagrams that contain closed loops? If not, do your answers to the above two questions change?

I won’t answer these questions for you, at least not in this post. Take some time to really play with these diagrams. There’s a lot of intuition you can develop with this “QED” game. After a while, you’ll have a pleasantly silly-looking piece of paper and you’ll be ready to move on to the next discussion:

What does it all mean?



Promoting science doesn’t mean dumbing it down

Thursday, January 28th, 2010

I recently found myself spending a lot of time thinking about science outreach and so was particularly tickled by an article in The Onion about the dumbing down of science. The Onion, of course, is “America’s finest [satirical] news source.” Included in the piece:

Sources pointed to a number of proposed shows they’ve abandoned in recent weeks, including […] Atom Smashers, a series that was roundly rejected by focus groups as being “too technical” and “not awesome enough.” “People liked that the particle accelerators were really huge, but apparently the show didn’t have enough smashing to hold their interest,” said a former employee.

I don’t own a television (is that weird?) so I don’t really know what programming is like on the Science Channel, but as a particle physicist I am often confronted with the question of how to explain my research to the public in a way that does not speak down to either the audience or the subject.

It is true that high energy physics isn’t a field which most people have everyday contact with, but this doesn’t mean that the material needs to be “dumbed down.” While the material might be unfamiliar to the audience, it is [very] wrong to assume that the audience is somehow incapable of understanding the material. In fact, it is the fault of the scientist if the audience is unable to understand the material, since it is part of the scientist’s responsibility to translate their technical work into something accessible to a broad audience without compromising scientific integrity.

This is not easy (though we here at US LHC are doing our best!) and there is a delicate balance between

  1. Conveying a sense of scientifically-established ‘truth’ rather than facts that people should take on faith (very unscientific!)
  2. Tailoring this argument to the interests, background, and patience of the audience
  3. Simultaneously conveying one’s personal excitement for the field.

The joke that I always keep in the back of my mind before presenting ideas to a non-technical audience is the story of an old man talking to the engineer of a steam locomotive.

The engineer does a very good job of explaining how coal is burned to boil water into steam, which is then used to power a system of pistons that cause the wheels to turn and the train to move forward. He explains the conversion of chemical energy to kinetic energy and the mechanics of the various valves and rods.

Eventually, the old man interrupts him and says, “Yes, yes, I understand all that. What I want you to explain is where you hide the horses.”

Flip, US LHC Blog


The game theory of the postdoc market: why today is a stressful day

Wednesday, January 6th, 2010

If you know any particle theory graduate students who have applied for postdoctoral positions this year, today might be an especially stressful time for them. While this is still a couple of years away for me, I’ve been watching with fascination as many of my friends and colleagues go through this process. [Note that while this holds primarily for the particle theory community, I imagine a similar process occurs for other fields.]

A postdoc is a 2-to-3-year academic position in between graduate school and an assistant professorship. It’s a time to develop one’s independent research without the teaching requirements of a faculty position. Postdoc applications are typically sent out in fall and offers start trickling back in December.

In a month or so things will be all sorted out, but early January is when applicants get to see how the sausage is made, so to speak. The process can be a bit rough primarily because of the small size of specialized research communities like particle theory. Unlike undergrad admissions where there are thousands of accepted applicants for a flexible number of positions, most research groups can only hire a few postdocs (often just one) and have no wiggle room. This means that if there’s only funding for one job, a group cannot afford to make multiple offers at a time because it would be a disaster if more than one person accepted. Making postdoc offers becomes a non-trivial multiple-stage process that requires some strategy.

To keep the playing field fair to the applicants, just about all departments have agreed to the particle theory postdoc deadline agreement, which states that no offer can be made that requires a response before January 7th. (That’s tomorrow!) This is effectively a deadline for the first round of offers and protects applicants from being forced to commit before other universities can make offers.

But now there’s still a lot of ‘game theory’ involved in the process. As is often the case in theoretical physics, a simple “toy model” is sufficient to demonstrate the phenomenon. Suppose you have postdoc applicants Alice, Ben, and Chris, and departments at universities X, Y, and Z. For simplicity let’s assume that these lists are ordered by status: A > B > C and X > Y > Z. Thus universities want to hire Alice while applicants want to go to University X. Let me pause and say that this is a gross simplification: usually rankings depend on particular research interests and can be confounded by all sorts of external factors (e.g. spouses).

So here are some examples of what could happen:

  • Scenario 1: Every department makes an offer to Alice, so Ben and Chris have to wait until after the first round deadline to get offers. They’re sweating bullets because they don’t know if they’ll have a job next year. Alice ends up going to X, and then by a similar process, Ben ends up at University Y in the second round, and finally Chris ends up at University Z in the third round.
  • Scenario 2a: Now consider the case where University Z wises up a little. They know that applicant Alice is out of their range, so instead of making a first round offer to her, they go straight to Ben. Now in the first round Alice can choose between Universities X and Y, but Ben has an offer from University Z and no longer has to worry about getting a job. Now Ben has until the first round deadline to accept Z‘s offer. He thinks that maybe he’s good enough for University Y, but he can’t be sure and he doesn’t want to gamble with his career so he accepts Z. In this scenario, then, the third-ranked university was able to snag the second-best applicant. We’ll say that University Y “fell through the cracks.”
  • Scenario 2b: Scenario 2a could have turned out differently: maybe Alice decides immediately that she wants to go to X. Then she can inform Y ahead of time (though she’s not obligated to do so) that she’s taking another offer, and Y can move on to make an offer (essentially a second-round offer) to Ben before the first-round deadline. In this case we end up with the same matching as Scenario 1. Note that it is often the case that it’s very difficult to choose between X and Y, so Alice ends up using the entire first-round period to mull over her choices; this scenario then has to happen at the last minute (e.g. the day before the deadline — today!), or might not happen at all.
  • Scenario 3: Another twist: this time, maybe University Y decides that it wants Chris (for one of multiple valid reasons) and University Z has ‘resigned’ itself to being the third-best choice, so it cuts straight to the chase and also makes an offer to Chris (e.g. so that the fourth university, W, doesn’t steal him as in scenario 2a). Now Alice goes to X, and Chris can choose between Y and Z, but Ben has no first round offer, even though Z would have been happy to have him. In this case the second-best applicant has fallen through the cracks. [e.g. maybe he is forced to accept a first round offer from the fourth best university.]

Now you can see how this can get a lot more complicated. There are maybe a hundred or so applicants and maybe a few dozen universities. You can expect that top-tier universities will target top-tier applicants and so forth, but it’s not clear where the boundaries are and it’s not clear who falls through a crack (as Ben did in scenario 2a). Maybe Alice does string theory while Ben does LHC physics and the top universities are currently looking for LHC physicists. Or maybe Alice has a spouse who refuses to live in X, so she won’t consider their offer. Maybe a university has multiple postdoc positions so they can afford to be more ambitious. At the end of the day, it gets really complicated.
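The round-by-round dynamics of the scenarios above are simple enough to simulate. Here is a toy sketch (entirely my own; the preference encoding and the function name are made up, and applicants are modeled as never gambling: each round they immediately accept the best offer they hold, as Ben does in scenario 2a):

```python
def offer_rounds(dept_prefs, applicant_ranking):
    """Simulate sequential offer rounds.

    dept_prefs: {dept: [applicants, most preferred first]}
    applicant_ranking: {dept: rank}, lower = more desirable to all applicants.
    Returns (hires, rounds) where hires maps applicant -> dept.
    """
    hired, rounds = {}, []
    unfilled = list(dept_prefs)
    while unfilled:
        # Each unfilled department offers to its favorite still-unhired applicant.
        offers = {}
        for dept in unfilled:
            wish = [a for a in dept_prefs[dept] if a not in hired]
            if wish:
                offers.setdefault(wish[0], []).append(dept)
        if not offers:
            break                                # nobody left to offer to
        # Each applicant accepts the most desirable offer in hand.
        for applicant, depts in offers.items():
            best = min(depts, key=lambda d: applicant_ranking[d])
            hired[applicant] = best
            unfilled.remove(best)                # losing depts re-offer next round
        rounds.append(dict(offers))
    return hired, rounds

ranking = {"X": 1, "Y": 2, "Z": 3}

# Scenario 1: everyone chases Alice; it takes three rounds to settle.
naive = {d: ["Alice", "Ben", "Chris"] for d in ("X", "Y", "Z")}
print(offer_rounds(naive, ranking)[0])   # {'Alice': 'X', 'Ben': 'Y', 'Chris': 'Z'}

# Scenario 2a: Z skips Alice and grabs Ben; Y "falls through the cracks".
shrewd = dict(naive, Z=["Ben", "Chris"])
print(offer_rounds(shrewd, ranking)[0])  # {'Alice': 'X', 'Ben': 'Z', 'Chris': 'Y'}
```

Even this crude model reproduces the crack-falling effect: a single strategic first-round offer by Z changes who ends up where.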

What practically happens is that there are a lot of telephone calls in December as faculty members in charge of their local postdoc search call up their colleagues to ask about their students who are applying. (Like I said, it’s a small community.) These are usually to further assess how well that student would fit as a postdoc and how likely it is that the student would accept an offer were it made. Then by late December the first round offers are made and the lucky students have until January 7th to accept an offer. Often departments will inform students if they’re short-listed, which partially means that they’re waiting to see how the market turns out before committing to making an offer. When a department hears from an applicant that they’ll politely turn down an offer, they can immediately go to the next person on their list, hoping that this person hasn’t already accepted elsewhere. As you can imagine, January 6th can be a bit of a scramble as departments try to make offers before applicants are forced to commit to an existing offer. It’s been suggested that proper etiquette requires one to inform institutions as soon as possible about one’s decisions, or even that it is only reasonable to hold on to no more than two offers, but currently such things are completely voluntary, and nobody wants to decline an offer unless they’re 200% sure that nothing will change their mind down the road.

Since a solid postdoc is one of the keys to proceeding onward to a faculty job, this can be an extremely tense time for young scientists. (It’s actually a very good thing that they have the winter break to be with friends and family during this period.)

There have been two recent developments in the postdoc market that have changed the game a little bit. The first one is the existence of an unofficial postdoc “gossip” page where postdoc offers can be self-reported. It is the only way to get a semblance of the status of the postdoc market. I have to admit that I keep up with this the same way that basketball fans keep up with the NBA draft.

The second development was just rolled out this year: a centralized system called AcademicJobsOnline to organize postdoc applications (“officially” endorsed by the HEP community). Like the Common Application for undergrad admissions, this makes it much easier for recommenders to upload one letter (instead of many dozens) and for an applicant to avoid filling out the same data on different forms. I’ve heard unofficially that this has led to a big increase in the number of applications to some institutions, which is something of a minor annoyance to prestigious institutions but can be a big boon for ‘diamond in the rough’ departments in lesser-known universities.

As many of my colleagues bemoan the uncertainty of January 6th, there is another conversation which keeps popping up every year: why can’t we do things the way the medical doctors do? The National Resident Matching Program is a ‘magical computer program’ that matches med students to 25,000 residency positions. The system is a bit mysterious, but it pairs up students with a residency in a way that somehow maximizes the desires of the medical program and the applicant (after interviews). The general statement from the people who wrote the common deadline agreement is that the NRMP’s large administrative overhead makes it difficult to implement in academia.

While it’s always true that it’s hard to shift to a new system, there is certainly some merit to having some kind of matching algorithm where ranked preferences from institutions and applicants can be taken into account to make postdoc pairings. Because the theoretical physics postdoc community is so much smaller than the medical resident community (by factors of tens of thousands), I suspect the overhead can be significantly trimmed. The program could be written to simulate the process as it exists today, with institutions making offers and applicants choosing between them based on the preference lists. Multiple rounds of matching can be done automatically without the threat of “falling through the cracks.” This way applicants don’t have to feel like they’re having their choice taken away from them. Unlike the NRMP, preference lists and the computer code can be made to be completely transparent to ensure that there are no secret back-room deals. In fact, now that applications are beginning to be centralized through AcademicJobsOnline, there already exists a natural framework to implement such an automated system.

I’m a bit naive about these things, but the actual implementation seems simple: applicants submit an ordered list of jobs and, afterward, institutions submit an ordered list of people they’d like to hire. Then what follows is an optimization algorithm that can be tuned depending on how one wants to break “ties.” This requires some choices that the community has to agree upon, but it is still more reliable than depending on whether or not someone officially declines an offer before the Jan 7 deadline.
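For what it’s worth, the NRMP’s ‘magical computer program’ is based on a variant of the Gale-Shapley deferred-acceptance algorithm, and the core idea really does fit in a few lines. Here is a minimal sketch for the toy model above, assuming one position per institution (the function name and encoding are mine):

```python
def deferred_acceptance(applicant_prefs, dept_prefs):
    """Applicant-proposing deferred acceptance (Gale-Shapley).

    applicant_prefs: {applicant: [depts, most preferred first]}
    dept_prefs:      {dept: [applicants, most preferred first]}
    Returns a stable matching {dept: applicant}.
    """
    rank = {d: {a: i for i, a in enumerate(p)} for d, p in dept_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # next dept each applicant will try
    match = {}                                      # dept -> tentatively held applicant
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                # applicant has exhausted their list
        d = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        worst = len(rank[d])
        if d not in match:
            match[d] = a                            # d tentatively holds a
        elif rank[d].get(a, worst) < rank[d].get(match[d], worst):
            free.append(match[d])                   # d trades up; old holder is free again
            match[d] = a
        else:
            free.append(a)                          # d keeps its current holder
    return match

# The toy model: everyone agrees X > Y > Z and Alice > Ben > Chris.
a_prefs = {a: ["X", "Y", "Z"] for a in ("Alice", "Ben", "Chris")}
d_prefs = {d: ["Alice", "Ben", "Chris"] for d in ("X", "Y", "Z")}
print(deferred_acceptance(a_prefs, d_prefs))
# {'X': 'Alice', 'Y': 'Ben', 'Z': 'Chris'}
```

Because offers are only held tentatively until everyone settles, no applicant can ‘fall through the cracks’ the way Ben does in scenario 3: the algorithm ends in a stable matching where no department and applicant would both prefer each other over their assigned partners.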

Every year this discussion pops up over informal lunchtime conversations at different universities, and every year people start out being very skeptical about radical changes… until January 6th, when the stress of the current postdoc market catches up to applicants and they worry that they might fall through the cracks (e.g. University Y in scenario 2a or Ben in scenario 3 above) and they wish that a more certain system were in place. Then a few months later everything works out, people are excited about their new jobs, and everybody forgets about the postdoc market again. Hopefully the community can work something out that avoids the shortcomings of the current system.

This post is dedicated to all of my friends who are holding their breaths for postdoc offers on this “day-before-Jan 7.” Good luck, everyone!

Flip, US/LHC


The Joy of Physics

Sunday, January 3rd, 2010

Dennis Overbye, the New York Times reporter with a physics degree from MIT and the newspaper’s local LHC expert, wrote a nice essay about the joy of doing physics. In it, he takes up the usual question of why anybody but physicists should care about the LHC, and runs through all the usual arguments that fundamental research in high energy physics has led to all sorts of spin-off technologies, including MRIs and PET scans that now play prominent roles in medical science. He goes on, however, to write

But better medical devices are not why we build these machines that eat a small city’s worth of electricity to bang together protons and recreate the fires of the Big Bang. Better diagnoses are not why young scientists spend the best years of their lives welding and soldering and pulling cable through underground caverns inside detectors the size of New York apartment buildings to capture and record those holy fires. They want to know where we all came from, and so do I.

And I think this is the point that people don’t seem to understand when talking to physicists. People don’t devote their lives to fundamental research for possible spin-off technologies or fame or money. The reason that people do science is the unbridled curiosity to understand what makes the universe tick.

This doesn’t get said enough, probably because it sounds so corny, but there’s something very wonderful about being able to ask nature how it works. Physicists feel like there’s nothing more noble than this pursuit of scientific truth, and it says something bright about our society that it values these pursuits enough to support them.

Along with other bastions of culture such as the arts or history, our scientific progress — how we understand the universe — is an indelible part of who we are. In college I took a course on Meso-American Archaeology, and I would wonder, why would anybody today care about Mayan cosmology? (Other than desperate movie producers.) The answer is that it tells us more about the Mayans as a people and how they understood their place within existence. What will our science tell future societies about us?

For everyday people, this is the beauty of understanding that protons are made of quarks or that a fantastic phenomenon called the “electroweak phase transition” occurred early in the universe: it enriches our lives by putting them in the ultimate context.

Unlike our grandparents (or even parents, for those a bit older), we can definitively say based on the scientific method that we are made up of mostly empty space, but the stuff that isn’t empty is a wonderfully complex scaffolding of atoms, which in turn are made up of a lattice of nuclei surrounded by a gas of electrons. These nuclei are held together by a force so strong that separating the components of a nucleon produces a shower of additional particles that weren’t actually “inside” the original object but are only created from the exchange of virtual (not quite physical) particles. These subatomic particles, despite being very small, played key roles in the development of the billions of galaxies in the universe when it inflated early in its lifetime: the little quantum fluctuations in the primordial plasma gave rise to the large-scale structure we see in the sky. Further, the universe is still expanding today. So here we are: little carbon-based life forms on a chunk of rock that developed sentience only a few million years ago, and we are able to know these magnificent things.

If that’s not simultaneously humbling and self-congratulatory, I don’t know what is.

Happy New Year everyone,
Flip, US/LHC blog


Who will pay for the arXiv?

Tuesday, December 29th, 2009

[Sorry if this is a little dry compared to my usual posts, but this is more of a news report for the HEP community.]

Last time I mentioned the INSPIRE system as an exciting development in high energy physics literature databases (no, that’s not an oxymoron). There’s another big change going on in that field next year, but this one will be behind the scenes. Nonetheless, it’s raised a lot of questions about the ownership and financial support of an important resource that is free to anyone in the world: the arXiv.

The e-print arXiv (pronounced “archive”) is a central repository of research articles in physics, mathematics, computer science, and quantitative biology. Since its inception in 1991 by theoretical physicist Paul Ginsparg, it has had a huge impact on the way science is done by providing free access to “pre-prints” of research papers. This meant that scientists from anywhere in the world with any background could access the latest research even if their university libraries didn’t have a copy of the particular journal in which it was published. This is a big deal since the cost of many of these journals created a gap between those institutions which could afford to pay for many journals and those which could not. In many ways arXiv “brought science into the 21st century” by allowing scientists to draw upon the collective scientific community more efficiently. Many credit it for pioneering the open access movement in scientific publishing.

But with increasing costs and the state of university budgets, the Cornell University Library (which operates the arXiv) is looking to find more cost-effective ways to support the arXiv and the much-needed overhauls in its software architecture (“arXiteXture”?). [Earlier this year Cornell closed its Physical Sciences library to help trim operational costs.] Currently the Cornell library pays the $400,000/year operating cost to make the arXiv available free of charge to the rest of the world. Here’s the official statement so far:

Cornell University Library is beginning an effort to expand funding sources for arXiv to ensure its stability and continued development. We intend to establish a collaborative business model that will engage the institutions that benefit most from arXiv — academic institutions, research centers and government labs — by asking them for voluntary contributions. We are working with library and research center directors at the institutions that are the heaviest users of arXiv to refine our plan and to enlist support. We expect to release the plan, with a call for broader engagement and contribution, in early 2010.

There's also a very handy FAQ on the funding changes, which are still a work-in-progress. Because the arXiv is such an important resource to a range of disciplines, the proposed changes have had some in the physics community asking whether a single private library system should retain 'ownership' of the arXiv, as researchers contemplated the 'nightmare scenario' of the arXiv becoming a pay-to-use site. (Fortunately this is not the case.) Indeed, the arXiv has been instrumental in supporting research institutions that are unable to afford the costs of journals from for-profit publishers. The FAQ provides some insight about the direction that the arXiv managers are heading.

Currently the plan is to ask the "heaviest user institutions" (other university library systems) to voluntarily contribute to support arXiv operational costs. The FAQ states that the library has already secured commitments from 11 of the 20 institutions that make the most use of the arXiv. (I've seen an unofficial list; these include many of the 'big name research institutes' around the world.) In return, besides academic karma, these institutions will be recognized for their support with arXiv banners and would possibly be privy to more detailed arXiv usage statistics. The target appears to be for such contributions to cover a fraction of the operating budget. There is no plan to charge individuals for uploading or downloading papers from the arXiv. This business model is meant to be a temporary plan for the next three years while a longer-term solution is worked out in collaboration with the wider community. It seems like the arXiv managers envision this long term plan being some kind of mixture of Cornell and user-institution support, but they are open to external support, e.g. from the National Science Foundation (which many physicists have suggested).

Just before the winter break the arXiv managers had meetings with the Cornell physics department to discuss the future changes to the arXiv. Unfortunately I was unable to attend that meeting because I was already back in California to spend the holidays with my family (… and to have transcontinental Skype conversations with my collaborators), but you can expect an official public announcement about the new arXiv program from the Cornell Library this coming January.



HEP literature databases to be ‘INSPIRE’d in 2010

Wednesday, December 23rd, 2009

While this won't catch as much press as the LHC's upcoming steps towards a physics run, there are big changes coming in 2010 to the way high energy physics literature is organized. This is very important: the vast databases of physics literature available at our finger tips through the Internet are what separate us from the cavemen. (Er… something like that.)



Speaking of cavemen, those familiar with the history of the web at CERN will not be surprised to find out that the first webpage hosted in North America was a particle physics literature database, SPIRES, operated by the SLAC National Laboratory. The database allows anyone in the world to look up bibliographic data about a range of documents including items that aren’t journal-submitted papers: PhD theses, conference talks, technical notes, and even video recordings.

CERN has its own library management system, Invenio, whose killer application is the fantastic CERN document server. The CDS has a broad collection of materials, including a nice set of general audience videos that readers of this blog might like. The two systems have their own strengths: SPIRES is known for its ability to work with metadata while Invenio’s architecture is known for its scalability and performance. So, after a survey of high energy physicists, it’s no surprise that SLAC and CERN (along with Fermilab and the German HEP lab DESY) combined their resources to implement SPIRES “user-level functionalities” within the Invenio framework.


The resulting combined project was christened INSPIRE, and the plan is to have a user release sometime next year. INSPIRE aims to produce a unified, modern HEP database that's based not only on papers (and, more recently, recorded talks), but on information more generally. This includes computer code, data, and figures. Instead of just searching, it also aims to tap into the potential of Web 2.0 by implementing a rating system, following individual users, and even tracking data usage.

[Now for some shameless self promotion: these are all functions that I wrote about two years ago, likening them to link aggregation sites like Digg, e-commerce a la Amazon, and the grand-daddy of Web 2.0: Google itself.]

You can read more about INSPIRE at the CERN Courier, interactions.org, Symmetry Breaking, a talk to the DOE High Energy Physics Advisory Panel, and through some talks at the HEP Information Resource Summit. At that last link check out Travis Brooks' demonstration of INSPIRE, or better yet, try out the alpha version yourself.



Theorists gone wild: CERN-TH Christmas Party 2009

Saturday, December 19th, 2009

It’s that time of year again. This past Friday the CERN theory group had its annual Christmas party, featuring its unique brand of silliness: the CERN-TH Christmas play. I’ve not yet had the privilege to visit CERN, but one of my deepest physics desires is to one day be around during one of these parties. Recently the group started archiving their Christmas plays and making them available online. Here’s a summary of the 2008 play, courtesy of Jester at the Resonaances blog (curiously Jester wrote the year incorrectly).

The 2009 play can be found on the CERN Document Server. I wholeheartedly recommend it. It's full of jokes about LHC media hype, pop culture, and yes, a lot of physics. The puns are packed in rather densely, so those of you who can pick up most of the references in one viewing should consider trying out for Jeopardy. I prefer to make a game of it, eating a cookie every time they mention a physicist I've met in person. Even if you don't get any of the jokes, it's still enjoyable to watch physicists having fun being silly.

My rough notes of the references in the play are below, after the break. (There are certainly a bunch that I’ve missed or misinterpreted.)



LHC #9, poised to take #1 soon?

Monday, December 14th, 2009

The successful restart of the LHC ranks #9 on Time magazine's list of the top 10 scientific discoveries of 2009. That's not bad considering that the LHC only had its first collisions last week and is still some time away from accumulating the integrated luminosity needed to make big discoveries. Despite this, the LHC has set new records for the highest-energy particle collisions made by humankind, and it was no small task to get this far.

If everything goes smoothly, we’re looking at 3.5 TeV per beam collisions in 2010, maybe going up to 5 TeV. High energies are sexy and look good for the press, but discoveries are all about finding an excess in the rate of some process (as we discussed in an earlier post, also Regina’s latest). In order to observe this excess, we need lots of data. Why is this? Suppose you wanted to know if Kobe Bryant or LeBron James has a higher shooting percentage. After just a few games, you could look at the stats but they would be difficult to trust: maybe one player had an off day, etc. But over the course of the entire season, the accumulated stats become more trustworthy.

Particle physicists measure how much data they have in "inverse picobarns." After next year the good folks at the LHC expect to have a couple hundred inverse picobarns of data. By comparison, the Tevatron at Fermilab has recorded something on the order of inverse femtobarns, i.e. thousands of inverse picobarns of data. That's (conservatively) the ballpark where physicists can really start looking for the subtle hints that exotic particles have been created.
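To see why inverse picobarns matter, here's a back-of-the-envelope sketch: the expected number of events for a process is its cross section times the integrated luminosity. The cross section below is a made-up number for a hypothetical exotic process, not a real measurement; the luminosities are just the rough scales mentioned above.

```python
# Expected event count: N = sigma (cross section) x L (integrated luminosity).
# A cross section in picobarns times a luminosity in inverse picobarns
# gives a pure number of events.

def expected_events(sigma_pb, lumi_inv_pb):
    """Expected number of events for a cross section in picobarns
    and an integrated luminosity in inverse picobarns."""
    return sigma_pb * lumi_inv_pb

sigma = 0.5  # picobarns -- a hypothetical exotic process, purely illustrative

# ~200 /pb (an early LHC dataset) vs ~5000 /pb (a Tevatron-scale dataset):
print(expected_events(sigma, 200))   # 100.0 events
print(expected_events(sigma, 5000))  # 2500.0 events
```

The same cross section yields 25 times more events in the larger dataset, which is why "lots of data" is what turns a hint into a discovery.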

What does this all mean? Well, it means that unless nature is very kind 2010 might still be a bit early for “paradigm shifting” discoveries. I should mention two things: (1) people are keeping their eyes out in case nature is this kind and (2) there’s still a lot of very important science to be done in this period (e.g. top quark mass measurements).

After 2010 the LHC will have a "long" shut down to prepare for 7 TeV per beam collisions. That's when the machine will begin in earnest its search for things like supersymmetry, extra dimensions, dark matter, and the Higgs (if we don't discover it sooner). Then the LHC can aim for #1 on Time magazine's list of scientific discoveries.

[If any of my fellow US/LHC bloggers have more updated information about 2010 expectations, please correct me!]



The double slit experiment: summing over paths

Friday, December 11th, 2009

Hi everyone. With lots of exciting successes with the LHC startup, I thought it would be good to teach everyone a bit about Feynman diagrams. These are the funny squiggly lines that one will often see on particle physicists’ chalkboards (or whiteboards if they do experimental physics…) that describe what’s going on when particles interact. The diagrams are very simple to draw and can actually be interpreted very straightforwardly, but like many things there’s a lot of very elegant physics going on “under the hood.” Thus, in order to build a bit of foundation for that, I’d like to discuss something even simpler: the double slit experiment.

Actually, we’ll do even more: we’ll do the triple, quadruple, and infinite-slit experiment! Take that quantum mechanics textbook!  I’ll then discuss why this is the basis for Feynman diagrams. This discussion (and the images) borrows heavily from Tony Zee’s excellent textbook, Quantum Field Theory in a Nutshell (there will be a new edition next spring).

I'm not going to discuss "wave-particle duality," the idea for which the double slit experiment is often invoked. Those who are unfamiliar with this can look it up in popular physics books or on Wikipedia, but it won't be necessary for our purposes. In a double-slit set up, a photon travels from some point A to some point B. Between those points, however, is an impenetrable barrier that has two slits (S1 and S2) cut into it, allowing the photon two paths to get to the point B.

You might ask why the photon's path has a kink in it at S1 or S2, since it seems strange that it takes a bent path. My answer: don't worry about it, this is quantum mechanics: weird things happen. More precisely, the probability for the particle to go from A to S1 to B is a well defined and non-zero quantity.

The point of the set up is this: we’ve constructed a system with a well defined initial state (photon at point A), a well defined final state (photon at point B) and well defined intermediate states:

  1. The photon goes from A to B via slit S1 or
  2. The photon goes from A to B via slit S2.

The rules of quantum mechanics tell us that if all we can measure are the initial and final states, i.e. we can't tell which intermediate process occurred, then the physics of the process is determined by the "sum" of both possible intermediate states. Now you might ask what I mean by "sum." Without going into the details, quantum mechanics assigns a [complex] number to each intermediate process. By summing these numbers for unmeasured possible intermediate states we get a number associated with the entire process. We call this number the probability amplitude. It's not important how we determine these numbers; what is important is that the square of the amplitude is the probability for the initial state to turn into the final state, i.e. a rate that we can measure in the lab. This is the "result" of the double slit experiment, though the actual experiment isn't important for us right now! All you need to understand is that these paths are summed.
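The "sum first, then square" rule is easy to play with numerically. Here's a toy sketch of my own (not a real calculation of the experiment): give each path a unit-modulus complex amplitude exp(i × phase), sum the amplitudes over the unmeasured paths, and square the magnitude of the sum. The phases themselves are made up for illustration.

```python
import cmath

def amplitude(phase):
    # Each intermediate path gets a complex number of unit modulus.
    return cmath.exp(1j * phase)

def probability(phases):
    # Sum the amplitudes of all unmeasured paths FIRST,
    # then square the magnitude of the total.
    total = sum(amplitude(p) for p in phases)
    return abs(total) ** 2

# Two slits whose paths have equal phases: the amplitudes add up.
print(probability([0.0, 0.0]))       # 4.0 -- constructive interference

# Two slits whose paths differ in phase by pi: the amplitudes cancel.
print(probability([0.0, cmath.pi]))  # ~0.0 -- destructive interference
```

Note that squaring each amplitude separately and adding would give 2 in both cases; the interference between paths only appears because we sum the complex numbers before squaring. That's the whole point of the double slit.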