Archive for January, 2010

The inevitable post about Haiti

Posted in media, News-related, politics, rant on 27/01/2010 by sangomasmith

Even in the sunny south of the world, we still get the news. We also get the shit that goes with it.

Before plunging into seriousness…

Posted in media on 27/01/2010 by sangomasmith

Here’s a lighter look at why the internet sucks. Thank you, Cracked…

Ugh

Posted in admin stuff on 16/01/2010 by sangomasmith

I should mention at this point that my internet access is… more limited than usual. Not that I’m exactly a frequent poster anyway…

On the joys of pilot studies

Posted in Crop science, Science on 16/01/2010 by sangomasmith

Another year looms.

Luckily, I worked for most of my holiday to prepare for it. Specifically, I spent until the day before Christmas looking at sick plants. Now, joy of joys, I have returned from my delayed festive season to find… a very iffy data-set.

As anyone who has done stats knows, size counts. Specifically, size (in this case, the number of independent samples you can draw upon to construct your data-set) lets you average out random errors in technique and draw statistically valid conclusions from the messy reality you’re trying to understand. A large sample size covers up a veritable multitude of sins.
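To see what I mean, here’s a toy simulation in Python (the numbers are entirely invented and have nothing to do with my actual plants): the same “experiment” is re-run many times at different sample sizes, and the spread between runs shrinks roughly with the square root of n.

    import numpy as np

    rng = np.random.default_rng(42)

    true_effect = 5.0   # a pretend "real" effect we are trying to measure
    noise_sd = 10.0     # random error in technique and biology

    for n in (5, 20, 100):
        # re-run the "experiment" 2000 times at this sample size
        estimates = [rng.normal(true_effect, noise_sd, n).mean()
                     for _ in range(2000)]
        print(f"n = {n:3d}: average estimate {np.mean(estimates):5.2f}, "
              f"spread between runs {np.std(estimates):5.2f}")

At n = 5 the individual runs are all over the place; at n = 100 they huddle obediently around the true value.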

Unfortunately, my experiment lacked a large sample size. And, as is usual in biology, the plants did not cooperate. The end result, of course, is that my data (although hinting that I’m on the right track) is like an obstructive bureaucrat: it can neither confirm nor deny.

The small-scale trial effect is well known in research. Often, it magnifies the difference between two samples by simple dint of random error. If the samples are large, random error tends to even out as the variations cancel each other. A small sample pool, however, allows a few rogue results to completely change the apparent outcome of the trial. The end result is that people get excited by the huge changes they see, only to be disappointed when larger trials shrink them to insignificance.
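Here’s the same idea from the other direction, as a made-up Python sketch: two treatments that are actually identical, compared at various sample sizes. The tiny trials “discover” big differences embarrassingly often.

    import numpy as np

    rng = np.random.default_rng(1)
    noise_sd = 10.0
    trials = 10_000

    for n in (4, 30, 200):
        # two treatments that are actually identical -- any difference is noise
        a = rng.normal(0.0, noise_sd, (trials, n)).mean(axis=1)
        b = rng.normal(0.0, noise_sd, (trials, n)).mean(axis=1)
        exciting = (np.abs(a - b) > 5.0).mean() * 100
        print(f"n = {n:3d} per group: {exciting:4.1f}% of runs "
              f"show a 'difference' bigger than 5")

With four samples per group, nearly half the runs show an apparent difference bigger than five units despite there being no effect at all; with two hundred per group, essentially none do.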

Of course, it’s never just the raw data that makes an accurate trial. Just as important is the structure of the trial itself. Mine had all the right moves (positive and negative controls, a neutral sample for comparison, elimination of environmental variables, etc.) except for one rather significant flaw: my testing system was subjective and non-blinded.

What this means is that my method of assessing each sample relied on personal judgement (at least, to a limited extent) rather than objective measurement. This, coupled with the fact that the person doing the assessing (me) knew what the subjects were and had a stake in a certain outcome for the trial, meant that the results were almost certainly skewed towards what I wanted rather than what actually happened.
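The fix, in retrospect, would have been cheap. A rough sketch of how one could blind the scoring (the file name and sample labels are invented): relabel everything with random codes, hand the key to someone else, and only decode after scoring is done.

    import csv
    import random

    # invented sample labels -- stand-ins for the real pots/plants
    samples = [f"plant_{i:02d}" for i in range(1, 21)]
    codes = [f"code_{i:02d}" for i in range(1, 21)]
    random.shuffle(codes)

    # the key maps blind codes back to samples; someone *else* holds it
    key = dict(zip(codes, samples))
    with open("blinding_key.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "sample"])
        writer.writerows(sorted(key.items()))

    # score the coded samples, then open the key once scoring is finished

If the effect survives that, it’s probably real.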

The end result of all of this is a data-set that looks wonderful in a graph but fails the rigorous test of science. I’m sure, on a personal level, that the effect is real. But I wouldn’t convince anyone with my evidence.