Pragmatic Meanings of Tautologies

In this post, I fail to spend even twenty seconds searching Google Scholar for linguistics articles about tautologies. What a waste of time, recapitulating a trivial subject.

When people say "It is what it is," they're halting their negative affective valuations of the status quo for lack of a foreseen exit ("I am what I am" and "Boys will be boys (regardless of your interventions)" have much the same character). There is a fair argument for the short-term instrumental value of this epistemic practice. The argument against its long-term rationality is that some instrumental motives and habits of behavior have slow learning rates: using fatalistic arguments to trivialize the attentional import of present setbacks can leave those non-plastic motives and habits maladaptive in changed causal contexts, such as when an organism unlearns its natural agency to avoid aversive stimuli.

As with most issues of epistemic practice, this was put most eloquently by Steven Kaas:


Similar tautologies have similar fatalistic power to halt trains of thought: "What's done is done" seems to conflate our (usual) inability to change the past with our ability to recognize the causes of a catastrophe and, armed with that understanding, to mitigate the potential recurrence of similar harms. "What will be will be" has some of the stopping power of "It is what it is" in inhibiting one's attempts to positively alter undesirable aspects of the present course of affairs, while also trivializing motives to seek information that aids prediction of future states.

There is another class of literal tautologies that people make common use of, exemplified by "Corn is corn." The pragmatic meaning of this utterance is that there is little variation across products-recognized-as-corn with respect to the uses one wishes to make of them, such that one quantity of corn can usually substitute for an equal quantity. Recognizing the fungibility of a good is, among other things, a way to trivialize the consideration one gives to the causal origin of the good (kind of like an assertion of path independence, saying your preferences are indifferent w.r.t. changes in the production of corn). If someone forcefully tells you that a class of things is itself, it may become prudent to investigate what variation in the attributes or origins of those things they are implicitly claiming you should be indifferent toward.

Here's another set of tautologies I think form a natural category; see if you can identify the commonalities of pragmatic meaning:  "It's not over till it's over," "Enough is enough," "I'll see you when I see you."

The literal-but-pragmatically-informative tautologies I most want to talk about, and the ones that motivated this post, are "No means No" and "Yes means Yes", common political slogans of feminism. (It's kind of wondrous to me, from an emotionally detached primatologist's perspective, that humans can effectively rally around slogans that are tautologies.)

"No means No," endorses a social maxim of taking people's statements of preference at face value, or of setting a low bar for judging when demands of omission of performance are reasonable and deserving of compliance in the domain of sexuality, in order to reduce the expected harms of sexual assault from present levels.

"Yes means Yes" has much the same content, but it emphasizes and prescribes adherence to a social maxim of presuming-by-default the non-consent of other people to involvement in sexual acts, absent their verbal utterance to the contrary.

My understanding of the slogan since I first heard it had been roughly that. But it was only today that I read the phrase "presumed non-consent", and was able to articulate its actual pragmatic message. Unfortunately, I read the phrase "presumed non-consent" in a discussion about markets trading in human organs, and not in (direct) connection with feminism. Maybe tautologies aren't the best possible marketing scheme for clear transmission of ideas.

-

Edit: ValueOfType on Twitter mentions the case of "rules are rules", and attempts to give an explanation.

The first pragmatic interpretation that springs to my mind is this: There is some piece of advice whose value the audience is considering in their present situation. The speaker categorizes the advice as a rule - a category of advice whose central characteristic is that such advice is reliably and generally worth following, even in cases where the actor has salient doubts about the unconditional application of the advice. (Ideally the speaker judges the worth of the advice by a mature, informed extrapolation of the volition of the audience, but more realistically they will judge by their own preference (or by typical human preferences in cases where the speaker knows that their own preferences are non-representative)). The speaker, having made this categorization for themselves, does not consider the particulars of the situation facing the audience (since rules are applied unconditionally), and they pass without comment on the calculation justifying their categorization of the advice as a rule (which may not be persuasive given the salience of the situation facing the audience).

Instead of asserting, "This advice is a central example of a rule, meriting general compliance," they effectively assert, "Rules merit general compliance", and thus assume the categorization (and assume away issues of categorization) as part of the context or background assumptions of the conversation going forward.

(Framing a disagreement in this way, as depending on the audience's failure to consider an issue of common understanding, might make a good entry for an ArgumentTropes wiki, perhaps under the title Friendly Reminder Conceit).

This perspective unifies at least two of the previously analyzed classes of tautologies: "X is X" often means, "It is a firm assumption of mine that the case you are contemplating is a central example of the category X (with respect to my preferences and the uses I have for X), having the static and central characteristics of that category - which categorization does not merit further consideration of present particulars."

A Duty To Investigate?

Do we have a moral duty to investigate moral issues? In what sense, if any, are we morally required to contemplate the consequences of our decisions, or the issues we prioritize? When are we obligated to determine particulars of our environment, such as the social norms that our culture enforces or the mental character of creatures that may sometimes deserve treatment as moral patients? Which unintuitive arguments or concepts, with potentially broad consequences for our decisions, must we consider?

I. Grounding terms.

We say that an act is necessary for achieving a goal when the ways of achieving the goal without performing the act are difficult, costly, or expected to be non-existent. When one's goal is to suppress certain intrusive thoughts associated with emotive evaluations (especially shame and sympathy), the cached strategies by which one has learned to suppress those thoughts might be called necessities or obligations of conscience. Obligations of conscience on a common topic, especially ones serving to promote the interests of another person or cause, might collectively be called a "natural duty" (to that person or cause), in contrast with negotiated duties of contract. Specific natural duties and obligations of conscience are often associated with established social roles.

Asking if you have a natural duty of interest (hereafter, duty) or an obligation of conscience (hereafter, obligation) can be asking whether the interest is moral (respectively, whether the act is justified with respect to the interest, or with respect to morality). On the other hand, asking if you have a duty or obligation can also be asking whether your mind should motivate you (w.r.t. the cause or the act) by intrusive thoughts and pangs of conscience. It is a remarkable feature of cognitive executive function (or a remarkable delusion of introspection) that one can sometimes decide whether an emotion is useful in a context of experience, and that the emotion will then happen or not in that context accordingly. The purpose of this text is to begin identifying the issues that determine whether certain duties to investigate are useful, on which basis relevant pangs of conscience may arise or be extinguished.

Pangs of conscience, being intrusive to the normal course of our thoughts, and often serving to promote the interests of others, have much in common with demands of performance (or omission) made to us by other people. Thus a moral duty (cf. obligation) is a type of morally justified motive (cf. act) that people will often frame as compliance with the demands of the moral authority of conscience (or the demands of law, or of one's social role, or other sources of normative advice which people internalize by pangs of conscience). Isn't that a cool perspective? Far out.

I'm not going to stick strictly to the duty/obligation distinction, because humans don't really stick to a strict hierarchy of instrumental and terminal intents, and also because I suck.

II. Negligence, Liability, and Standards of Preventative Care.

This section was motivated by my reading some tort law, which is why it's all about harm. There are probably often symmetric moral motives relating to the production of Good through means other than harm prevention (though these motives are a little different from duties in that understanding them might not result in pangs of conscience). 

Often, we expect that magnitudes of expected future harms will decrease with preventative effort. When this is the case, we may have moral duties to invest effort into harm reduction.  For example, you may have some of the following duties:

A) The duty to investigate which harms may occur in foreseeable futures: This duty is especially likely to apply to you if 1) you have a comparative advantage in reducing or preventing those harms, or if 2) others expect you to make such efforts as part of your assumed social roles, or if 3) you yourself may have enabled those harms through your unconsidered action (for example, by hiring and giving resources and colour of authority to someone who cannot safely perform their job). A strongly related duty is to consider the unintended consequences of acts that you are contemplating or committed to performing, including speech acts, and perhaps acts of cognition.

B) The duty to investigate ways to reduce the magnitudes of expected harm: This may include obligations such as a) identifying guiding principles of behaviour (such as ethical theories, or the established "good practices" of a field or enterprise), or b) examining new applications of technology or academic theory to identified potential future harms (e.g. physical barriers to limit accidental contact with dangerous objects, or social and economic theory to reduce undesirable market inefficiencies). Strongly associated is the duty to find acts which are forward-compatible with your plans and priorities in different foreseeable futures (including ways to manage harms if you become incapacitated).

C) The duty to contemplate moral and normative principles: You may do this indirectly, by teasing out a naturalistic meta-ethics that accords with your moral intuitions regarding the functioning of goal-directed agents in general, or directly, such as by a) inferring important concepts and evaluative principles that can be used to explain and interpret observed judgements of culture and conscience (especially concepts which address present conceptual confusions), or b) extending these concepts and principles to judge unfamiliar cases as they arise or are foreseen, or c) reflectively applying aspects of your evaluative cognition, such as emotions and persuasive arguments, to understanding your evaluative cognition, in order to produce coherent preferences commanding your endorsement, or d) identifying cases where your behaviour does not accord with your endorsed values or their joint implications. Closely related is the duty to infer the possible judgements and intents motivating another person's actions, and to use them as moral evidence informing your own behaviour (e.g. before you remove Chesterton's fence), especially if the person has better information or greater philosophical authority in the domain where they're acting.

Those are some possible investigative duties relating to the prevention of harm. In the case where expected future harm reduction is an increasing function of invested effort, how much effort are we obligated to give?

One answer with deontological character might say, "If you are in a position to discover whether X is the case, and the case of X is grave matter, then you have a duty to investigate X." 

A more utilitarian answer considers the costs of preventative (investigative) effort vs. the expected returns on harm reduction, and strikes a balance. A modification of this answer considers the comparative advantages of different people in reducing a common notion of harm, and suggests a socially optimal joint assignment of preventative labour, maximizing the production of social welfare with respect to that common notion of harm. Another modification considers the liabilities that society imposes on people for failing to meet certain "reasonable standards of care" with respect to harms of a specific domain. Someone interested in mechanism design might consider how these liabilities and standards of care are effectively formulated, what outcomes they serve, and what moral consideration one should give to them (or to the outcomes that they would serve in some limit of strategic social coordination better representing society's "revealed preference").
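To make "strikes a balance" a little more concrete, here is a minimal formalization in my own illustrative notation (nothing below is drawn from tort law or any particular source): let e ≥ 0 be invested investigative effort, C(e) its cost, and B(e) the expected harm reduction it purchases. The utilitarian answer chooses

\[ e^* \in \arg\max_{e \ge 0} \; B(e) - C(e), \]

and when B is concave, C is convex, and the optimum is interior, this reduces to the marginal condition B'(e^*) = C'(e^*): keep investigating until one more unit of effort buys less expected harm reduction than it costs. The comparative-advantage modification then assigns each component of the preventative labour to whoever bears the lowest marginal cost for it.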

One consideration society gives in the per-case formulation of standards of care is the culpability of the actor's state of mind. As Larry Alexander writes in "Insufficient Concern: A Unified Conception of Criminal Culpability", "Even very tiny risk-impositions can be reckless if imposed for insufficient or misanthropic reasons, just as very large risk-impositions can be nonculpable if supported by weighty reasons." The decision whether to impose liabilities on an actor for recklessness or insufficient care (made by individuals operating under colour of society's authority to judge) will often depend ultimately on whether the actor's conduct "can be understood in terms that arouse sympathy in the ordinary citizen". Thus your own sympathy toward yourself or toward others may be an effective tool for establishing personal standards of requisite care, in cases where harm prevention increases with invested effort.

III. Duties not to investigate.

There may be cases where the balance of invested effort, the expected returns of utility from investigating, and the external imposition of liabilities proscribes contemplation or investigation, in part or in whole.

For example, I think some Islamic traditions, as part of maintaining the religious establishment's claim to moral authority, proscribe the laity from independent reasoning on moral issues (ijtihad) until one has gained recognition as a legal scholar.

Just as there are role-dystonic social thoughts, we can expect that there are role-dystonic investigations (e.g., "finding the most self-serving acts consistent with one's altruistic self-image" is an investigative act inconsistent with an altruistic self-image).

A more familiar example might be the hyper-consciousness sometimes found in OCD, where intrusive thoughts are not easily suppressed, and one's brain consequently devotes excessive serial resources to analyzing some unproductive topic, like imagining harms befalling one's family members. In this case, one might be justified in using decision-bundling cognitive reappraisal to try to ignore a topic of contemplation entirely.

Other templates of hazardous information fit here. For example, you might not want to seek out information that would arouse the suspicion or wrath of powerful people.

I have two other examples here, not covered by the standard list of hazardous information templates, but I don't like to talk about them, because I think I might have a duty not to contemplate them, and this duty might extend to other people.

IV. Conclusions, some of them related to the preceding text.

Do we have duties to investigate? Probably: a lot of the time, investigation and contemplation produce valuable results, and we are justified in investing investigative effort at moderate personal cost, especially when we feel responsible for preventing possible harms, such as when others expect it of our social role or when we have a comparative advantage in harm prevention.

Should we be motivated in general by pangs of conscience? I don't know. Thoughts promoting attention to issues of high moral value make a lot of sense as part of a mind architecture, but it also seems like our consciences could be more sensibly built.

Given that we do have consciences, and that we sometimes don't do the right thing without their pangs, how much effort should our consciences demand of us in cases where the production of Good increases with effort? Whatever level of effort evokes sympathy in the ordinary citizen (enough that they wouldn't impose liabilities on you for tortious reckless ignorance), the correct level is probably higher than that. Maybe we need a praiseworthy Schelling point, like "10% of your income".

Finally, in cases where you expect that investigating or contemplating a topic would not lead to good consequences, maybe don't investigate. Pretty controversial stuff, I know.