Early on in my student career, I learned that a favorite excuse of physicists is that we are beholden to take “whatever works” in the task of describing Nature. Sure, sure, we do give some degree of greater weight to “beautiful” theories (some of us more so than others), but on the whole we are a culture of pragmatists. We tease our mathematician compatriots rather a lot for their lack of proper respect for the correspondence of theory to reality. And in front of an audience of experimental physicists, it really is best not to talk too loudly about things that cannot be proven one way or the other by observation.
But don’t let the simplicity of “whatever works” fool you. I can’t imagine that any good physicist would actually be happy to accept or use a method that just so happens to predict reality, without some kind of deeper understanding of why it works, how far it can be pushed, and where it can break down. In fact, I would dare to say that in the majority of experimental physics analyses, only a small fraction of the effort goes into performing the measurement itself; the rest is spent chasing down and quantifying all the uncertainties one has to acknowledge about various aspects of the method.
What I find quite amazing is that every generation of physicists manages to be trained in the scientific method almost entirely by example. In Math one is taught the rules of deduction, and there is little ambiguity about what constitutes a valid mathematical proof, starting from the axioms and following every link in the chain onwards to the conclusion. There is no such uniformity in Physics; in fact, there is not even any requirement that a physics graduate have any knowledge of formal logic. Now, I don’t mean to say that the Quod Erat Demonstrandum structure of Math is, or should be, equally applicable to Science. While one can use “A→B” in Math to show that B is true (after having shown that A is true), the same statement in Science is only good up to some uncertainty, and, more importantly, up to whatever other “hidden” variables went into controlling the behavior of A, B, and A→B. If scientists tried to achieve as much formal rigor in their reasoning as mathematicians do, I’m not sure we would be able to make appreciable progress. And this is where another “whatever works” often comes in: even if unable to fully list and address all the issues in the scientific chain, one can hope to do something reasonable, and then check the answer against some kind of control region (a.k.a. a lab test) in order to argue that it should work in the region of interest.
If you think that the above paragraph is cryptic and rambling, imagine trying to convey the hows of this thinking process to a student fresh out of classes, where all theories and experimental results have been neatly laid out, always with exactly the required pieces of information, no more, no less. The actual process of research is nothing so linear or finite. A good researcher has to keep an eye on almost everything, because it is anybody’s guess what will turn out to be necessary, important, or prone to breaking. And just because “whatever works” happens to work does not mean that one has demonstrated it is not a lucky accident. Unfortunately, the burden of scientific proof seems to be one of the most difficult things to learn in this apprenticeship.
[ On the very incomprehensible drawing: The grey blocks are a histogram. The red things are for you to interpret. The background is supposed to be waves, but apparently those are well outside my technical ability. ]