I. Grounding terms.
We say that an act is necessary for achieving a goal when the ways of achieving the goal without performing the act are difficult, costly, or expected to be non-existent. When one's goal is to suppress certain intrusive thoughts associated with emotive evaluations (especially shame and sympathy), then the cached strategies by which one has learned to suppress those thoughts might be called necessities or obligations of conscience. Obligations of conscience on a common topic, especially those serving to promote the interests of another person or cause, might collectively be called a "natural duty" (to that person or cause), in contrast with the negotiated duties of contract. Specific natural duties and obligations of conscience are often associated with established social roles.
Asking if you have a natural duty of interest (hereafter, duty) or an obligation of conscience (hereafter, obligation) can be asking whether the interest is moral (respectively, whether the act is justified with respect to the interest, or with respect to morality). On the other hand, asking if you have a duty or obligation can also be asking whether your mind should motivate you (with respect to the cause or the act) by intrusive thoughts and pangs of conscience. It is a remarkable feature of cognitive executive function (or a remarkable delusion of introspection) that one can sometimes decide whether an emotion is useful in a context of experience, and that the emotion will then happen or not in that context accordingly. The purpose of this text is to begin identifying the issues that determine whether certain duties to investigate are useful, on which basis the relevant pangs of conscience may arise or be extinguished.
Pangs of conscience, being intrusive to the normal course of our thoughts, and often serving to promote the interests of others, have much in common with demands of performance (or omission) made to us by other people. Thus a moral duty (cf. obligation) is a type of morally justified motive (cf. act) that people will often frame as compliance with the demands of the moral authority of conscience (or the demands of law, or of one's social role, or other sources of normative advice which people internalize by pangs of conscience). Isn't that a cool perspective? Far out.
I'm not going to stick strictly to the duty/obligation distinction, because humans don't really stick to a strict hierarchy of instrumental and terminal intents, and also because I suck.
II. Negligence, Liability, and Standards of Preventative Care.
This section was motivated by my reading some tort law, which is why it's all about harm. There are probably symmetric moral motives relating to the production of Good through means other than harm prevention (though these motives are a little different from duties, in that understanding them might not result in pangs of conscience).
Often, we expect that magnitudes of expected future harms will decrease with preventative effort. When this is the case, we may have moral duties to invest effort into harm reduction. For example, you may have some of the following duties:
A) The duty to investigate which harms may occur in foreseeable futures: This duty is especially likely to apply to you if 1) you have a comparative advantage in reducing or preventing those harms, or if 2) others expect you to make such efforts as part of your assumed social roles, or if 3) you yourself may have enabled those harms through your unconsidered action (for example, by hiring and giving resources and colour of authority to someone who cannot safely perform their job). A strongly related duty is to contemplate the unintended consequences of acts that you are contemplating or committed to performing, including speech acts, and perhaps acts of cognition.
B) The duty to investigate ways to reduce the magnitudes of expected harms: This may include obligations such as a) identifying guiding principles of behaviour (such as ethical theories, or the established "good practices" of a field or enterprise), or b) examining new applications of technology or academic theory to identified potential future harms (E.g. physical barriers to limit accidental contact with dangerous objects, or social and economic theory to reduce undesirable market inefficiencies). Strongly associated is the duty to find acts which are forward-compatible with your plans and priorities in different foreseeable futures (including ways to manage harms if you become incapacitated).
C) The duty to contemplate moral and normative principles: You may do this indirectly, by teasing out a naturalistic meta-ethics that accords with your moral intuitions regarding the functioning of goal-directed agents in general, or directly, such as by a) inferring important concepts and evaluative principles that can be used to explain and interpret observed judgements of culture and conscience (especially concepts which address present conceptual confusions), or b) extending these concepts and principles to judge unfamiliar cases as they arise or are foreseen, or c) reflectively applying aspects of your evaluative cognition, such as emotions and persuasive arguments, to understanding your evaluative cognition, in order to produce coherent preferences commanding your endorsement, or d) identifying cases where your behaviour does not accord with your endorsed values or their joint implications. Closely related is the duty to infer the possible judgements and intents motivating another person's actions, and to use them as moral evidence informing your own behaviour (E.g. before you remove Chesterton's fence), especially if the person has better information or better philosophical authority in the domain where they're acting.
Those are some possible investigative duties relating to the prevention of harm. In the case where expected future harm reduction is an increasing function of invested effort, how much effort are we obligated to give?
One answer with deontological character might say, "If you are in a position to discover whether X is the case, and the case of X is grave matter, then you have a duty to investigate X."
A more utilitarian answer considers costs of preventative (investigative) effort vs. expected returns on harm reduction, and strikes a balance. A modification of this answer considers comparative advantages of different people in reducing a common notion of harm, and suggests a socially optimal joint assignment of preventative labour, maximizing the production of social welfare with respect to the common notion of harm. Another modification considers liabilities that society imposes on people for failing to meet certain "reasonable standards of care" with respect to harms of a specific domain. Someone interested in mechanism design might consider how these liabilities and standards of care are effectively formulated, and what outcomes they serve, and what moral consideration one should give to them (or to the outcomes that they would serve in some limit of strategic social coordination better representing society's "revealed preference").
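To make that balance concrete, here is a minimal sketch in Python (my own toy model, not anything drawn from tort doctrine): it assumes the probability of harm decays exponentially with invested effort, so harm reduction has diminishing returns, and finds the effort level that minimizes effort cost plus remaining expected harm. All the functions, constants, and units are illustrative assumptions.

```python
import math

def expected_harm(effort, p0=0.5, loss=100.0, k=1.0):
    # Toy assumption: the probability of harm decays exponentially with
    # preventative effort, so harm reduction increases with effort but
    # with diminishing returns. Effort and loss share the same units.
    return p0 * loss * math.exp(-k * effort)

def total_cost(effort):
    # The utilitarian quantity to minimize: effort spent plus the
    # expected harm that remains after spending it.
    return effort + expected_harm(effort)

# Grid search over effort levels from 0 to 10 in steps of 0.01. In this
# toy model the optimum also has a closed form, where marginal harm
# reduction equals the marginal cost of effort:
#   e* = ln(k * p0 * loss) / k  (about 3.91 here).
cost, effort = min((total_cost(e / 100), e / 100) for e in range(1001))
print(f"optimal effort ~ {effort:.2f}, total cost ~ {cost:.2f}")
```

This is essentially the logic of the Hand formula from negligence law (breach where the burden of precaution B is less than probability times loss, B < PL), extended to a continuous choice of effort; the liability-based modifications above would enter this sketch as added penalty terms or constraints rather than as changes to the harm function itself.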
One consideration society gives in the per-case formulation of standards of care is the culpability of the actor's state of mind. As Larry Alexander writes in "Insufficient Concern: A Unified Conception of Criminal Culpability", "Even very tiny risk-impositions can be reckless if imposed for insufficient or misanthropic reasons, just as very large risk-impositions can be nonculpable if supported by weighty reasons." The decision whether to impose liabilities on an actor for recklessness or insufficient care (made by individuals operating under colour of society's authority to judge) will often depend ultimately on whether the actor's conduct "can be understood in terms that arouse sympathy in the ordinary citizen". Thus your own sympathy towards yourself or towards others may be an effective tool for establishing personal standards of requisite care, in cases where harm prevention increases with invested effort.
III. Duties not to investigate.
There may be cases where the balance of invested effort, expected returns of utility from investigating, and the external imposition of liabilities proscribes contemplation or investigation, in part or in whole.
For example, I think some Islamic traditions, as part of maintaining the religious establishment's claim to moral authority, forbid the laity from independent reasoning on moral issues (ijtihad) until one has gained a title as a legal scholar.
Just as there are role-dystonic social thoughts, we expect that there are role-dystonic investigations (E.g., "finding the most self-serving acts consistent with one's altruistic self-image" as an investigative act inconsistent with an altruistic self-image).
A more familiar example might be the hyper-consciousness sometimes found in OCD, where intrusive thoughts are not easily suppressed, and one's brain consequently devotes excessive serial resources to analyzing some unproductive topic, like imagining harms befalling one's family members. In this case, one might be justified in using decision-bundling cognitive reappraisal to try to ignore a topic of contemplation entirely.
Other templates of hazardous information fit here. For example, you might not want to seek out information that would arouse the suspicion or wrath of powerful people.
I have two other examples here, not covered by the standard list of hazardous information templates, but I don't like to talk about them, because I think I might have a duty not to contemplate them, and this duty might extend to other people.
IV. Conclusions, some of them related to the preceding text.
Do we have duties to investigate? Probably; a lot of the time, investigation and contemplation produce valuable results, and we are justified in investing investigative effort at moderate personal cost, especially when we feel responsible for preventing possible harms, such as when others expect it of our social role or when we have a comparative advantage in harm prevention. Should we be motivated in general by pangs of conscience? I don't know. Thoughts promoting attention to issues of high moral value make a lot of sense as part of a mind architecture, but it also seems like our consciences could be more sensibly built. Given that we do have consciences, and that we sometimes don't do the right thing without their pangs, how much effort should our consciences demand of us in cases where the production of Good increases with effort? Whatever level of effort evokes sympathy in the ordinary citizen (enough that they wouldn't impose liabilities on you for tortious reckless ignorance), the correct level is probably higher than that. Maybe we need a praiseworthy Schelling point, like "10% of your income". Finally, in cases where you expect that investigating or contemplating a topic would not lead to good consequences, maybe don't investigate. Pretty controversial stuff, I know.