Nicole Ackerman | SLAC | USA


Data: Women in Physics

This is a response to Lucie’s post “powerchick”.

It is true that Poisson statistics can make it difficult to tell whether a situation involving few people agrees with a “null hypothesis”. I did a calculation like this once when teaching an undergraduate class. The class was not very large (~40) and was about 30% female. After a few weeks, many people had dropped, and looking at the names, I noticed that a higher percentage of the female students had dropped the course. Before getting too concerned, I worked out the numbers. My null hypothesis was that the drop rate among the male students was the “normal” rate for all students, male and female. I could then calculate how often I would see this many women (or more) drop the class purely from statistical fluctuations, with no gender bias at all. The answer was between 30 and 40% (I no longer have the exact numbers). Had it been 5%, the idea that something extra was making women drop the class would be reasonable. But really, the effect was just a statistical fluctuation.
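A calculation of this kind can be sketched as a binomial tail probability. The counts below are hypothetical (I don't have the real numbers any more): say 12 women in the class, a 25% baseline drop rate estimated from the male students, and 4 women observed to drop.

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or
    more 'successes' (here, drops) among n students if each drops
    independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers, not the actual class data:
p_drop = 0.25   # baseline drop rate estimated from the male students
n_women = 12    # women enrolled
k_dropped = 4   # women who dropped

print(f"P(>= {k_dropped} of {n_women} women drop): "
      f"{p_at_least(k_dropped, n_women, p_drop):.0%}")
```

With these made-up numbers the tail probability comes out around 35%, i.e. a result in the 30–40% range is entirely consistent with ordinary fluctuations under the null hypothesis.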

HOWEVER – had I taught the class over many years, I could have accumulated better statistics. If in every class slightly more female students dropped than the male drop rate predicted, a statistically significant effect could emerge over time, even though no single measurement showed one. This happens in physics whenever there are few events. One great example is the search for solar neutrinos in the Homestake Mine (data shown below). Each individual measurement has very large error bars, so large that a single run could not determine whether the prevailing model was correct. But when the measurements are combined, the error bar shrinks (see the point on the right), and it becomes clear that the data do not match what theory predicted.
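The way the combined point's error bar shrinks can be sketched with an inverse-variance weighted average. The run values below are illustrative only, not the actual Homestake data:

```python
def combine(measurements):
    """Inverse-variance weighted average of (value, sigma) pairs.

    Combining N measurements of similar precision shrinks the
    uncertainty roughly as 1/sqrt(N).
    """
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# Five hypothetical runs, each measuring a rate near 2 with error bar 1.0:
runs = [(1.6, 1.0), (2.4, 1.0), (1.9, 1.0), (2.2, 1.0), (1.9, 1.0)]
mean, sigma = combine(runs)
print(f"combined: {mean:.2f} +/- {sigma:.2f}")
```

Each individual run here is consistent with almost anything between 1 and 3, but the combined error bar is about 1/sqrt(5) of a single run's, which is how the averaged Homestake point can exclude a prediction that no single measurement could.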

When dealing with issues regarding women and racial minorities in the sciences (and especially physics), the numbers are so low that evidence of bias or discrimination is often dismissed as statistically insignificant or “anecdotal”. But when (here in the US) fewer than a dozen African-Americans earn a physics PhD each year, or a graduate school sees only 3 women enter its physics PhD program, the issue cannot be set aside until the sample grows large enough to show something statistically significant. Many patterns can be seen – perhaps not within a single school or experiment – that point to issues of bias in physics. I will create a post later with links to some of the studies.

When the sample sizes are small, we must keep in mind that if the null hypothesis were “women are being treated differently”, we wouldn’t be able to disprove it either.

Solar Neutrino Data from Homestake Experiment

