Friday, November 11, 2011

-- More Stuff --

-----------------------------------------------------------------

'Hat tip' to Mark Michaels for directing me to these 2 web pages:

http://eco.confex.com/eco/2011/webprogram/Paper30214.html

http://www.wildernesscenter.org/podcasts/default.aspx?a=152&c=0&k=

The first is simply an abstract on the results of the official IBWO search in South Carolina from Matthew Moskwik -- nothing new here, and I don't know that the full report is accessible anywhere on the Web(?) but still worth a glance.

The second is a recent podcast interview with Tim Gallagher from the Wilderness Center site, primarily on the recent Imperial story (starts at about the 24-min. point). Gallagher doesn't give a lot of interviews, and this is probably the best one I've yet heard from him (~35 mins.).

I'm playing with a possible blurb loosely about "null hypotheses" which may or may not ever get posted, but for the few who might be interested I'll throw out a couple of other statistically-oriented posts (they'll bore most readers, and are only of tenuous value here):

http://scienceblogs.com/principles/2011/09/statistical_significance_is_an.php

http://blogs.discovermagazine.com/gnxp/2011/11/the-problem-of-false-positives/

The first is an old post from physicist Chad Orzel on "statistical significance" and the second a more recent and more technical post from Razib Khan on research false positives. (I only throw these out because they border on why I'm leery of all statistical discussion with regard to a topic like the IBWO -- discussion of null hypotheses can be tricky at best and disingenuous or misleading at worst -- my first stat professor in grad school flatly told us to distrust ALL discussion of statistics in journal articles that wasn't carried out by Ph.D.-level statisticians, because most others misapply (or misinterpret) stats! -- though maybe things have improved in the 35 years since then.)

And of course feel free to continue any further discussion of matters in the prior post as well.

ADDENDUM: Tim Gallagher's Imperial story also made NPR today (part of all this sudden attention is no doubt due to Gallagher working on a book on the species… I don't mean that cynically, just that if you have a fascinating story at your fingertips you naturally want to publicize it as much as possible). The segment from "Science Friday" is here (just click on the video to hear the audio):

http://www.sciencefriday.com/videos/watch/10414
---------------------------------------------------------------

1 comment:

cyberthrush said...

Well, that was quick -- a professor has already sent me "a few thoughts" about stats/null hypothesis testing (and obviously I concur with his thoughts about 'sample size').
He wishes to remain anonymous, but okayed putting his comments, as follows, up here (probably just for the math-geeky):


1 - In my experience, PhD statisticians know the theory but can't apply it. Period. But there is a bigger problem for statistical significance, or null hypothesis testing (NHT), that the first article you linked to hints at but fails to identify. NHT depends on four critical factors that influence the decision to accept or reject a null hypothesis: 1) sample size; 2) difference(s) between or among means (i.e., effect size); 3) variance; and 4) chosen alpha level. The single biggest problem with NHT, and why many are calling to eliminate it, is sample size. A small, meaningless effect can be significant with a large sample size, and a large, meaningful effect can be non-significant with a relatively small sample size. What does this mean? Often, NHT says more about sample sizes than it does about the biological relevance of the data! All biologists should seek to find meaningful results, not significant results, but this unfortunately is not the case with present-day NHT conventions. Statistical significance utterly fails at revealing meaningful results, whereas practical significance--reporting the effect size or magnitude--is ideally suited for doing so. This is why many journals, with the social sciences (psychology especially) and medicine leading the way, now require authors to report effect sizes in addition to or in place of NHT. A well-established literature exists on the appropriate measures of effect size for various experimental designs and inferential analyses. Today, many practitioners are advocating complete abandonment of NHT (which I do not entirely support). Ironically (getting back to your comment, CT), many professional statisticians now recognize this. And clinical trials--for which there could hardly be a better example of the need to identify meaningful results--are now assessed almost exclusively by effect magnitude. Someday ornithologists and other zoologists will catch on.
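[The professor's central claim above -- that a fixed, biologically meaningless effect will cross the significance threshold once the sample is large enough -- can be sketched numerically. This is my own back-of-the-envelope illustration, not part of his comments; the effect size d = 0.05 is a hypothetical "tiny" standardized effect, and the p-value uses a normal approximation to the two-sample t statistic:]

```python
import math

def two_sided_p(t):
    """Two-sided p-value for statistic t, via a normal approximation."""
    return math.erfc(abs(t) / math.sqrt(2))

def t_stat(d, n_per_group):
    """t statistic for two equal-sized groups with pooled SD = 1 and
    standardized mean difference (effect size) d."""
    return d * math.sqrt(n_per_group / 2)

d = 0.05  # a tiny, arguably meaningless effect (hypothetical value)
for n in (100, 1_000, 100_000):
    p = two_sided_p(t_stat(d, n))
    print(f"n per group = {n:>7,}   effect size d = {d}   p = {p:.4g}")
```

[The effect size never changes, yet p slides from clearly non-significant at n = 100 to overwhelmingly "significant" at n = 100,000 -- the verdict reflects the sample size, not the biology.]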

2 - With a proper understanding of NHT, the problem of false positives is not a particularly relevant issue. With the exception of occasional proportional data (e.g., two groups having a response of exactly 23.0%), the null hypothesis is NEVER true. When you compare the wing length, for example, of two species of birds (e.g., Brown-headed vs. Pygmy Nuthatches), one will always be incrementally larger than the other (e.g., 67.137 vs. 67.133 mm). The appropriate question is not whether the null hypothesis is true, but whether the magnitude of the difference is meaningful or relevant. I should add that the second article you linked to suggesting N = 20 for each cell is based wholly on NHT statistical power, and fails to acknowledge ethical considerations when it comes to the use of animals and other difficulties associated with acquiring "adequate" samples in field studies. Again, NHT provides the wrong metric.
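[Again my own illustration, not the professor's: taking his wing-length numbers (67.137 vs. 67.133 mm) and an assumed within-species SD of 2 mm (a hypothetical figure), one can ask how large a sample it would take for that trivial 0.004 mm difference to register as "significant" at p < 0.05 -- which NHT guarantees will eventually happen, since the null is never exactly true:]

```python
import math

# Hypothetical numbers echoing the wing-length example in the comment:
# a 0.004 mm mean difference with an assumed within-species SD of 2 mm.
diff_mm = 67.137 - 67.133
sd_mm = 2.0
d = diff_mm / sd_mm  # standardized effect size, roughly 0.002

# Smallest n per group at which this difference reaches p < 0.05
# (normal approximation: "significant" once d * sqrt(n/2) >= 1.96).
z_crit = 1.96
n_needed = math.ceil(2 * (z_crit / d) ** 2)
print(f"effect size d = {d:.4f}; n per group needed = {n_needed:,}")
```

[With nearly two million birds per group the difference becomes statistically significant while remaining biologically meaningless -- the effect-size question ("does 0.004 mm matter?") is the one worth asking.]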