Habits, Perceived Affordances, and Interrupted Causal Contingencies

D asked me to fix the toilet. The chain that allows the handle outside the tank to raise the flapper within the tank (the toilet's epiglottis) had become detached, and she couldn't figure out how to reattach it. I sat down to examine the internals of the toilet tank, closed the valve on the water supply line, and placed my hands into the tank water.

"Eeh! Gosh, that's cold!" I exclaimed.

"Well," D said, "you already thought to close the valve on the water supply line. Why don't you flush the remaining water out of the toilet tank?"

Right! Yes! That's what I ...almost did? Why didn't I do that? I depressed the lever handle, ...

And the toilet didn't flush because the chain that allows the lever handle outside the tank to raise the flapper had become detached. That's the thing that I was trying to fix, remember? Pretty funny.

Or, no, probably not funny, but maybe it's the sort of thing I could write a post about? Yes! Done.

Of Confidence, Understanding, and Conclusions

Confusion and understanding are important concepts in rationality and metaethics. The concepts of conclusions and resolved beliefs are similarly important. These concepts appear in the written works of our culture much as they appear in our daily language: without precise definition or explicit connection to the mathematics of probability theory. One may wonder: what is the connection, if any, between epistemic confidence and understanding, or between confidence and resolved belief? Below I will outline some guesses.

What if like:
You're chatting with a peer. They ask a question and you pause for just a beat before replying. I propose that the condition of your mind during that pause is nearly the same as the mental condition of confusion. The difference is that the condition of a mental pause is almost imperceptibly brief, while confusion persists for noticeable and even uncomfortable spans of time. Confusion is a condition where the mind is generating and discarding ideas, having not yet settled on an idea which is good enough to justify ending the search. Confusion doesn't have to be preceded by a surprise, and not all uncertainty is accompanied by confusion. Where in a Bayesian mind do we place confusion? We place it not in uncertainty, and not in surprise, but in the failures of attempted retrieval and production of ideas, theories, strategies. On the time scale of milliseconds, these failures are mental pauses. On the time scale of moments and minutes and more, these failures are called confusion.

Just as the subjective feeling of confusion comes from noticing one's own failure at cognitive production, so, contrariwise, the feeling of understanding comes from noticing a productive fluency: noticing that one is quickly able to generate ideas, hypotheses, and strategies in a domain, and feeling confident that the produced cognitive contents are valuable, and that the value justifies terminating a mental procedure of search or construction.

From the idea of terminating a mental train of thought, we naturally come to the concept of conclusions. I once wrote, "Our beliefs gain the status of conclusions when further contemplation of them is no longer valuable." There are many reasons why you might disvalue further contemplation of an idea! You could have other things which you prioritize thinking about! You could expect that there are no further reasons which could change your belief! You could notice a limit on your reasoning, like an inability to quickly find reasons which would change your belief, while still accepting that such reasons might exist! There could be unpleasant environmental consequences of continuing to contemplate a topic, like if someone didn't want you to think about the thing and they could tell when you were doing so!

So conclusions are merely those beliefs that you maintain when a train of thought terminates, and which you're likely to recall and make future use of. Resolved beliefs are different! I know that "resolved belief" isn't a standard term in the rhetoric of rationality or metaethics, but maybe it should be. Resolved belief is the great promise of maturity! A resolved belief is a conclusion where you have a reasonable expectation that no further reasons exist that would change your mind. The conservation of expected evidence means that something has gone wrong in your mind if you expect that your belief will change some specific way after seeing some unknown evidence, but it is still entirely rational for you to expect that your beliefs will change in some unknown direction after seeing unknown evidence. If you expect that your beliefs will change a lot (in some unknown direction!), that just means you're about to see strong evidence. One way to phrase this situation is to say that the question whose truth value you are contemplating is about to be strongly tested.
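
To make that concrete, here's a tiny numerical sketch of conservation of expected evidence (a toy coin of my own invention, not anything from the literature): either possible flip outcome would move my belief about the coin's bias, but the outcome-weighted average of my possible posteriors equals my prior exactly.

```python
# Conservation of expected evidence, illustrated with a coin whose bias is
# either 0.3 or 0.8 (a made-up two-hypothesis space for this sketch).
prior = {"biased_low": 0.5, "biased_high": 0.5}
p_heads = {"biased_low": 0.3, "biased_high": 0.8}

def posterior(outcome):
    """Bayes' rule: P(hypothesis | outcome) for a single observed flip."""
    likelihood = {h: (p if outcome == "heads" else 1 - p) for h, p in p_heads.items()}
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

# Predictive probability of each outcome under the prior.
p_outcome = {
    "heads": sum(prior[h] * p_heads[h] for h in prior),
    "tails": sum(prior[h] * (1 - p_heads[h]) for h in prior),
}

# Expected posterior belief in "biased_high", averaged over possible outcomes.
expected_posterior = sum(p_outcome[o] * posterior(o)["biased_high"] for o in p_outcome)

print(posterior("heads")["biased_high"])  # belief goes up after heads (~0.727)
print(posterior("tails")["biased_high"])  # belief goes down after tails (~0.222)
print(expected_posterior)                 # ...but the expectation equals the prior: 0.5
```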

I'm not totally sure that updating on lots of bits of evidence is the same as reaching a resolved belief. It seems like a different thing to say, "this issue is probably settled hereafter" than to say, "our inquiry changed our beliefs drastically". If the latter is to say that an idea has been tested, then the former is to say that the idea perhaps can be tested no further: that there is no test whose observed result would cause us to update our beliefs. The relationship between these two sounds like something that mathematicians have probably already figured out, either in analyzing convergence of beliefs in learning algorithms, or perhaps in logic or metamathematics, where settled conclusions, once tested now untestable, are very salient. Unfortunately, I'm too stupid for real math. Some quantities that seem relevant when making a judgement of whether you've dealt with most of the important possibilities are 1) the conceptual thoroughness of your investigations, per genus et differentiam, 2) the length of time that you've spent feeling confident about the topic and similar topics, and maybe 3) the track record of other people who thought they had investigated thoroughly or who thought that the issues were settled. For some issues, maybe you can brute force an answer by looking at how your beliefs change in subjectively-possible continuations of your universe? That's probably not a topic about which I can think productively.
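
One toy way to operationalize "probably settled hereafter" (my own framing, nothing standard) is to ask how much I expect my estimate to move if I ran one more test. In the little Beta-Bernoulli sketch below, that expected movement shrinks toward zero as observations accumulate, which is at least one precise sense in which a belief can become resolved.

```python
# Toy sketch: after many flips of a coin, the expected movement of my estimate
# from one *more* flip shrinks toward zero, even though early flips moved it a lot.
# Beta-Bernoulli updating; the "true" bias of 0.7 is an arbitrary choice.
import random

random.seed(0)
true_bias = 0.7
alpha, beta = 1.0, 1.0  # uniform prior over the coin's bias

for n in range(1, 201):
    flip = random.random() < true_bias
    alpha, beta = alpha + flip, beta + (not flip)
    estimate = alpha / (alpha + beta)

    # Expected absolute change in the estimate if one more flip were observed,
    # weighting the two possible outcomes by their predictive probabilities.
    est_if_heads = (alpha + 1) / (alpha + beta + 1)
    est_if_tails = alpha / (alpha + beta + 1)
    p_heads = estimate  # posterior predictive probability of heads
    expected_shift = (p_heads * abs(est_if_heads - estimate)
                      + (1 - p_heads) * abs(est_if_tails - estimate))

    if n in (1, 10, 50, 200):
        print(f"n={n:3d}  estimate={estimate:.3f}  expected shift from one more flip={expected_shift:.4f}")
```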

On the topic of other maybe-relevant math that I'm too stupid to understand, I'm reminded of the idea of the Markov blanket of a variable in a probabilistic causal model. Within the pretense of validity of a given causal model, if you condition your belief about a variable X on the states of a certain set of nearby variables, the Markov blanket of X, then your belief about X shouldn't change after conditioning on the states of any variables outside of the blanket. Unlike in the case of beliefs that are settled by logical proof, I don't think it's the case that the untestable belief in X after having conditioned on its Markov blanket will necessarily be confident. Possibly that's not the case with proof either when you start talking about undecidability and self-reference. I guess the best question my curiosity can pose right now about resolvedness of belief is whether convergence of belief in probabilistic logical induction is related to an independence result like the one which obtains when you condition on the Markov blanket of a causal variable.
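
Here's a brute-force sketch of the screening-off property I'm gesturing at, using a little chain-shaped network I made up (G -> A -> X -> C, so the Markov blanket of X is just {A, C}): once I condition on the blanket, further conditioning on the distant variable G doesn't move my belief about X at all, even though that belief needn't be anywhere near certainty.

```python
# Brute-force check that conditioning on X's Markov blanket screens off a more
# distant variable, in a toy chain G -> A -> X -> C with made-up probabilities.
from itertools import product

p_g1 = 0.3
p_a1_given_g = {0: 0.2, 1: 0.7}
p_x1_given_a = {0: 0.1, 1: 0.8}
p_c1_given_x = {0: 0.4, 1: 0.9}

def bern(p_one, value):
    """P(value) for a binary variable with P(1) = p_one."""
    return p_one if value == 1 else 1 - p_one

def joint(g, a, x, c):
    """Joint probability of one full assignment under the chain structure."""
    return (bern(p_g1, g) * bern(p_a1_given_g[g], a)
            * bern(p_x1_given_a[a], x) * bern(p_c1_given_x[x], c))

def p_x1(**evidence):
    """P(X=1 | evidence), computed by enumerating the full joint distribution."""
    numerator = denominator = 0.0
    for g, a, x, c in product([0, 1], repeat=4):
        world = {"g": g, "a": a, "x": x, "c": c}
        if any(world[name] != value for name, value in evidence.items()):
            continue
        weight = joint(g, a, x, c)
        denominator += weight
        if x == 1:
            numerator += weight
    return numerator / denominator

print(p_x1(a=1, c=1))        # conditioning on the blanket {A, C} alone
print(p_x1(a=1, c=1, g=0))   # adding G beyond the blanket changes nothing
print(p_x1(a=1, c=1, g=1))   # same value again
```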

Learning rate adaptation is another math thing in the semantic neighborhood of resolved belief, but it's not a thing Bayesians often talk about, from what I've seen.
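
To gesture at why I put it in the same neighborhood: here's a minimal sketch (my own illustration, assuming a Gaussian observation model with known noise and a prior worth one pseudo-observation) in which exact Bayesian updating of a posterior mean is literally an online update whose learning rate shrinks on its own as evidence accumulates.

```python
# Estimating the mean of a Gaussian with known noise: under a prior worth one
# pseudo-observation, the Bayesian posterior-mean update is an online update
# whose step size shrinks as 1/(n + 1) - a learning rate that adapts itself.
import random

random.seed(1)
true_mean, noise_sd = 2.0, 1.0
estimate, n_effective = 0.0, 1.0  # prior mean and prior strength (pseudo-observations)

for n in range(1, 101):
    observation = random.gauss(true_mean, noise_sd)
    n_effective += 1.0
    learning_rate = 1.0 / n_effective           # shrinks automatically with evidence
    estimate += learning_rate * (observation - estimate)
    if n in (1, 10, 100):
        print(f"n={n:3d}  learning_rate={learning_rate:.3f}  estimate={estimate:.3f}")
```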

But so then:
The feelings of confusion and understanding both accompany our judgements about our individual abilities to quickly produce valuable cognitive contents in a conceptual domain. In familiar domains, we find that some beliefs and valuations, "conclusions", are reliably useful after being reproduced - retrieved or rederived. Confusion and understanding of a deeper kind pertain to a specific kind of conclusion that I've called a resolved belief, somewhat neglecting the notion of resolved valuations. In my terminology, we say that a belief is resolved to the extent that we expect that there are no facts we could learn which would change our minds about the belief. As I have introduced the notion, probability theory does not require that a belief which no longer seems testable need be confident. Still, I have a strong intuition that resolved beliefs and valuations should generally be and will generally be confident, in just the same manner that I become more confident that I know the contours of a face after doing a portrait study. And while I think that resolved beliefs will generally be confident, the converse clearly fails: one's confidence in a belief or valuation alone cannot justify an expectation that the thing won't change in the future. I don't yet know what theoretically justifies a judgement that an issue is settled, but maybe it has something to do with belief convergence in the mathematics of learning. The concept of metaethically resolved valuation might then be viewed as a variety of resolved belief in the domain of self-characterization. I hope that other philosophical questions can also be translated into terms where the mathematics of learning is clearly relevant, so that philosophical questions can potentially be resolved just as much as can computational ones or physical empirical ones. Resolved belief is the great promise of maturity, and it's high time that we figured out what it means.

Addendum: Hrm, the papers I've seen so far about posterior convergence make it seem like statisticians care about such things for pretty different reasons than I do. The usual situation presented in the literature on convergence of parameter estimates is like, "You get a sequence of percepts of the thing, and the noise has a certain shape. Depending on the shape, we can show how fast your guesses become better." Adapting that to be relevant to confusion, I'd want to frame attempts at cognitive production as noisy percepts of something. That's not the worst idea, but it doesn't seem to capture most of my idea of the source of failures of cognitive production. Maybe the literature on decreasing KL divergence in model selection will help to clear things up.

*distant voice*: Aren't you forgetting about Solomonoff induction?

Oh. Yeah. That's probably the topic I should review first.