Polynesian Testicles and American Turkeys: The Ways of Knowing We Can’t Explain
Anecdote 1:
During the Thanksgiving episode of This American Life, they aired an old segment with the chef and food writer Vertamae Grosvenor, where she explained that you can tell when a turkey is properly cooked purely by the sound of the grease sizzling. They tested this claim by playing the grease-sounds of one turkey from about 5 different points in its journey to the table, and asking her how done the turkey was at each point.
She did O.K.: definitely better than chance, but not perfect, and she mentioned that it was harder than she expected.
Anecdote 2:
I was reading about how the ancient Polynesians were incredible navigators, reliably finding small islands across thousands of miles of open ocean without any modern equipment. One of the methods that these navigators used to figure out where they were and where they were headed involved dipping their testicles in the water to get a better sense for the temperature and the current.
Synthesis:
Much has been made of the fact that we often can’t know how modern neural nets come to the conclusions that they do. This has been an issue of particular ethical importance in medicine (what’s more important, the diagnosis, or the explanation of the diagnosis?). What we acknowledge less is our own inability to reliably explain our thinking. What the hell does a Polynesian navigator learn by dipping his testicles in the ocean? It would probably be really hard to explain! I would even argue that he doesn’t fully know! It doesn’t tell him exactly what the current is doing, or what swell is coming, but somehow, when combined with all of his other sensory information, it provides him with a good sense of where he is and where he’s heading.
The radio segment with Vertamae Grosvenor is another good example of the mind’s opacity to itself. She was pretty confident that, when cooking a turkey, sound alone provided the information she needed to decide when it was done. In actuality, she learned that sound probably wasn’t enough: it was combining with other inputs (smell, appearance, time, who knows what else?) in ways she doesn’t have access to, and only together did they provide the final insight.
My point is that I’d like to push back on the assumption that we have decent insight into the ways our minds work. When objecting to the fact that ML algorithms are black boxes, we have a tendency to ignore the more profound point that we ourselves are black boxes as well. Some of the reasons a doctor diagnoses a patient with one disease over another are expressible, obvious, but in many cases, some of the reasons the doctor’s mind came to that decision will be somewhat inexplicable, unspeakable, unknown even to the doctor.
I don’t mention this as a defense of machine learning, or as an argument that we shouldn’t worry about how an algorithm comes to a decision. I’m not even sure I believe that machine learning algorithms offer an effective path to better understanding our own thinking. My point here, I think, is that our generally scientific modes of questioning, which focus on isolating variables and explaining their impact, are often ineffective when we turn them on ourselves. We should even be cautious of the explanations our minds offer up for their decision making! These might be bluffs! Vertamae is the preeminent expert on her own thinking, and her mind told her it was making its decisions based purely on the sound of the grease frying. This turned out to be incorrect! She thought she knew how she was thinking, but she wasn’t totally right.
My point:
It’s black boxes all the way down!