Thursday, May 16, 2024

-- An Example Worth Knowing --

 ————————————————

Medical test diagnoses are just one example (but an important one) of how tricky it is to reason about conditional probabilities; in fact it’s well-known that doctors themselves often misinterpret medical test results. One common example (of many) given in chapters on Bayesian or conditional statistics runs along these lines:


Suppose 1% of 40-year-old women have breast cancer. And suppose a certain mammography machine correctly diagnoses breast cancer 90% of the time (i.e., IF a woman has breast cancer there is a 90% chance the machine will say so). Suppose the same machine has a 10% chance of giving a false-positive — it says a woman has breast cancer but she does NOT. 

Suppose now, Mary, a 40-year-old woman, goes in for a regular mammography screening and the machine indicates she has breast cancer. What is the probability that she actually does?

People (including doctors) often think there must be close to a 90% chance she is afflicted with cancer. In actuality, it is closer to an 8% chance!

The initial math is not all that difficult:

1% of 40-year-old women have breast cancer, so out of say 1000 such women who go in for testing, ~10 will actually have the disease, on average, and 990 will not.

The machine has 90% accuracy so of those 10 with cancer the machine will diagnose 9 of them correctly (but miss one).
Of the 990 without breast cancer the machine will yield a false-positive on 10% of them, or 99 women.

Thus, out of 1000 original women tested, a total of 108 (9 + 99) will test positive for breast cancer, though only 9 will actually have it.

9 out of 108 positives works out to ~8.3% -- the positive predictive value of this screening. THAT is the likelihood, from this one screening, that Mary truly has breast cancer (it is unfortunate how much fear and anxiety such tests automatically generate -- this goes for a number of other medical tests as well). [Note how things would change IF the machine gave NO false-positives (but all do).]
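For readers who like to see the arithmetic spelled out, here is a minimal sketch (in Python) of the same Bayes'-rule calculation, using only the hypothetical figures from the example above:

    # Bayes'-rule sketch for the mammography example above (hypothetical figures)
    prevalence = 0.01            # 1% of 40-year-old women have breast cancer
    sensitivity = 0.90           # P(machine says cancer | woman has cancer)
    false_positive_rate = 0.10   # P(machine says cancer | woman does NOT have cancer)

    # Chance that any given woman tests positive (true positives + false positives)
    p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

    # Chance she actually has cancer GIVEN the positive result (positive predictive value)
    p_cancer_given_positive = (prevalence * sensitivity) / p_positive

    print(round(p_positive, 3))               # 0.108  (i.e., 108 of 1000 test positive)
    print(round(p_cancer_given_positive, 3))  # 0.083  (i.e., ~8.3%)

The same 1000-women tally worked through above falls right out of these three numbers.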

The above is just the basic, rough (but noteworthy) math of the given situation -- there are plenty of other variables to consider: any known genetics of the patient, relevant family history, or current pertinent physical or physiological findings. But I’m using the example solely to portray how easily our common sense or intuitions mislead us. The problem is that people automatically assume a "90% accuracy" rate means that any given result has a 90% chance of being true, when in fact a larger context, with more conditional factors, must be brought into consideration when looking at any single case... and guess what, that's almost always true in life.

————————————————

ADDENDUM:   For readers surprised by these numbers I ought to further explain that this sort of confusion is commonplace for “screening” tests, which are employed to find candidates for further “diagnostic” testing or examination that is more specific for the condition being investigated (anytime a doctor orders a test for you, it is worth asking whether it is a screening test OR a diagnostic test -- perhaps most major ailments these days have both).


Tuesday, May 14, 2024

— More of Same —

 ————————————————

Jim Simons, one of the true titans of quantitative research in mathematics and investment strategies, died a few days ago. Not too unexpected for an 86-year-old lifelong smoker, but still a tremendous loss of a brilliant thinker, creator, and philanthropist. Not too relevant here, except that it comes at a time when I am bemoaning the ‘quality’ of research in the Ivory-bill arena (and elsewhere).


…and so more IBWO auditory analysis appears from John W. First off, will just say that if John honestly believes his “scientific” work is soooo excellent (including his South Carolina storyline) he really should get one of the 100s of university or PhD-level biologists or ornithologists out there to work with him as a co-author -- it could boost his credibility, and such people are always looking for opportunities to publish fresh material… in fact such academics ought to be knocking down John’s door hoping to collaborate on all this monumental study -- except that I’m doubtful there is even one such individual in the entire country willing to associate with this work and add their name to it….


Won’t get into a long, hopeless back-and-forth about all the problems (mostly of weak assumptions, circularity, and skewed samples) involved in yet another spectrogram paper (or will it be retracted?), but will say that the main conclusion I can draw from it is that some creature (not even necessarily a bird) exists in Florida woodland (toward which the data are heavily skewed), and possibly other woods, that makes kent-like sounds with a 587 Hz harmonic (p.s.... I don’t know how the specific subset of Florida datapoints used in the paper was selected out of the 100s actually recorded in the Choctaw?). And perhaps ARUs run for a couple of weeks in Maine, Michigan, Canada, or Timbuktu would also pick up such kent sounds? Who knows?

John can speculate why that harmonic (from unknown creatures, no matter how much he wants to insist they must be IBWOs) differs from the only actual IBWO sounds ever recorded (from the Singer Tract), or from the excellent work done by Cornell in the Big Woods, but no one really knows for sure. A frequent assumption is that the Singer Tract birds were giving different call notes because they were recorded while their nest site was being disturbed; certainly plausible, but it would be a stronger argument if data actually showed that the Hz of calls from other large birds (waterfowl, crows, raptors, even other woodpeckers) indeed changes similarly when recorded at intruded nest sites versus in the open field... perhaps that has already been done, or at least the data might be available, but I’ve not seen it. And hey, maybe IBWO kents are slightly different on cloudy/rainy days than on bright/sunny days, or different when mates or juveniles are nearby rather than distant, or different when they're very hungry versus chock-full, or, or, or.... the potential variables in field biology are enormous.


And what IF the 587 figure is REAL?… do we now just ignore or toss out recordings of kents starting at, say, 660 and above, even if they accompany sightings, because surely they must be bogus? Or do IBWOs have an entire range of kent sounds or regional 'dialects', or are they morphologically constrained to 587?

Just scanning over the limited data John presents, what strikes me is the variability of the Hz values -- could be just different bird sexes, ages, circumstances, or other factors, but the point is it lends me little confidence in what the 587 "average" even really means -- and no descriptive statistics are run on those values to give a better sense of that variability in his cluster of interest (eyeballing it, it looks like a possible range from roughly ~560 to 610?) -- just a simplistic histogram intended to imply some significant result.
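Just to illustrate the kind of minimal summary I mean, here is a quick Python sketch run on entirely made-up Hz values (NOT John's actual datapoints, which I don't have) spread across roughly that 560-610 range:

    # Descriptive statistics on HYPOTHETICAL kent-harmonic values (Hz);
    # these numbers are invented for illustration, not taken from the paper
    import statistics

    hz_values = [565, 572, 580, 584, 587, 590, 594, 601, 608]

    mean_hz = statistics.mean(hz_values)
    sd_hz = statistics.stdev(hz_values)      # sample standard deviation
    lo, hi = min(hz_values), max(hz_values)

    print(f"n = {len(hz_values)}, mean = {mean_hz:.1f} Hz, "
          f"SD = {sd_hz:.1f} Hz, range = {lo}-{hi} Hz")

Even that much (a mean with a standard deviation and range attached) would say more about the spread than a bare histogram does.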

To some extent John's emphasis is really not on the 587 figure but on the harmonic structure of the kents. What is far more interesting to me, though, is the clear discrepancy between the lower Hz values of the Florida and Big Woods (AR) birds -- which may not be so easily explained away (though John attempts to, even after originally implying that fixed morphology alone will constrain the variability of kent calls). John thinks he has lumped together an array of putative Ivory-bill calls for this study, but in fact he may have essentially lumped together apples and oranges (while perhaps deliberately leaving out some key ones). It's just hard to know for sure. He thinks he's building a logical argument, but the underlying assumptions make it more like a shaky house of cards.


In the Ivory-billed debate I don't expect the level of quantification, precision, or critical thinking found in the work of Jim Simons.... but I do expect better than what is often served up these days.

The more and more and more sounds that are claimed to be from Ivory-bills over time, in more and more places, the more outlandish it appears to critics that we cannot find a single bird to clearly (not beautifully, just clearly) photograph; cannot find a single active roosthole, nesthole, or foraging site, for a bird that keeps eating, sleeping, perching, day after day after day (not to mention kenting and double-knocking), and breeding year after year after year. And so it goes (there are some possible explanations for all this, but I do understand the frustration of skeptics).


IF the Ivory-billed Woodpecker is extinct, the prolonged saga of this bird is an example of poorly done science…. but just as true, if the Ivory-billed Woodpecker does exist, its seemingly never-ending narrative offers ongoing examples of similarly weak science (...with that said, I will look forward to any future evidence and drone footage from the Latta group in La. -- Steve Latta speaks to a gathering later this month and perhaps will have an update from any winter efforts).


Finally, frankly, for the moment we barely need more research (it's reached the stage of idle, speculative chatter by now) on the Ivory-billed Woodpecker -- on its behavior, its biology, its habitat, its diet or voice or wing-flap rate…. we need good, detailed sightings and photos/video… we already have the equipment, the potential manpower, and the abilities to get that, but yeah, it ain’t easy, nor cheap! (It might even require a large team of dozens of searchers -- I tire of hearing from the solo wannabes with their plans/chutzpah to go to location XYZ for a few days and accomplish what no one has done in 80 years)… and worse, the scoffing and mockery that now shadow anyone thinking of attempting the task discourage most from even mounting an effort. So here we be. If Cornell, with all their background, experience, equipment, and knowledge, couldn't pull it off, how do we expect any John Q. Public to do it, except by fortuitous, freaky chance?

So I'll let Jim Simons have the last word, for, as much of a nerdy, hard-working, successful quant as he was, he nonetheless noted simply, “Luck plays a meaningful role in everyone’s lives”…. In the Ivory-bill arena what we badly need right now is not more research, but more luck!


————————————————


Thursday, May 09, 2024

-- Summer Ahead --

 —————----——————————

Not sure there will be much significant IBWO news through the summer 😦 so, in between yawns, may just start posting a few other things I find interesting, entertaining, or otherwise worthwhile. 

....beginning with some references on ‘logical fallacies’ and ‘cognitive biases’:


https://www.grammarly.com/blog/logical-fallacies/

https://en.wikipedia.org/wiki/List_of_fallacies

https://www.scribbr.com/research-bias/cognitive-bias/

https://en.wikipedia.org/wiki/List_of_cognitive_biases


—————————----——————



Thursday, May 02, 2024

— Help With A Research Study Requested (not specific to IBWO) —

————————————-----------—


A Canadian reader, birder, and neuroscientist (University of Toronto) writes me to ask for help with their current research study of the effects of birding skills on the brain. The study is online here (requiring ~15+ mins. of time, and “open to all” of any background or experience):

https://birdingstudies.com

The writer notes, in part, “this study is part of our wider project of connecting birding and citizen science activities with research on cognitive health and brain function. At the population level, we're exploring how trends in species prevalence (e.g. as quantified via eBird) correlate with geographic trends in how people process birds. At the smaller scale, we have a line of neuroimaging research looking at beneficial changes to brain structure and function that result from decades spent learning about birds.”

Please participate if you have the time (a raffle for binoculars is included for participants).

—————————————--------——