news_2016-11

On 2016-11-24 I began an effort to learn to enjoy news reports. I'll be updating this post throughout the remainder of November with whatever stories I judge to be something other than ... gosh, I don't know a word that's hateful enough to describe most news reports. Epistemic poison? I probably won't be linking the original articles, because the original articles are seriously not worth sharing beyond the summaries that I'm providing here. Have I mentioned that I do not yet like news reports?

The Economist: Syrian government bombs rebels.
CNN: Japan and South Korea are now sharing military intelligence.
Bloomberg: The Philippine economy is doing poorly.
Bloomberg: In an effort to account for untaxed black-market money, the Indian government demonetizes its largest bank notes and issues a new series of bank notes which can be exchanged for the old ones.

The Guardian: Hurricanes continue to exist in the tropics.
The Guardian: There's a big fire in Haifa, Israel's third-largest city.
The Guardian: Peace negotiations between the Colombian government and FARC, a revolutionary socialist guerrilla force, which have been in progress since 2012, are still in progress.
The Guardian: Thousands of Rohingya Muslim refugees fleeing oppressive social conditions in Myanmar since 2015 continue to exist.
NY Times: People continue to kill and die in the Iraqi civil war.

That's it so far. Maybe I should make a list of all the sites that I looked at in order to show how very many fail to host a single story of greater merit than "A person said that they wouldn't like it if a thing happened that wasn't in their interest".

Questions, declarations, demands

Questions are approximately statements of ignorance, but they're better at getting people to respond. Why? Consider:

S1. Declaration of ignorance
Alani: Oh no, here comes Hukov. He's such a prick.
Hukov: Hi, Alani. I don't know how the world would look if participatory consent were a socially overvalued moral construct. Idk if you know either.
Alani: Hukov, I don't care about you or what you know. Go away.

S2. Inquiry
Alani: Oh no, here comes Hukov. He's such a prick.
Hukov: Hi, Alani. What if Jesus were a raisin?
Alani: I... hm... what if Jesus were a raisin? Let's talk this out.
...

So maybe one reason that questions are more effective than declarations of ignorance in getting an informative response is that questions, by skipping the mention of the speaker and their mental state, give the audience fewer opportunities to decide they're not interested.

Another reason is that questions are kind of like demands for answers, and people have a quick mental reaction that promotes compliance with demands.

If a demand is roughly a statement of the speaker's desire for the audience to perform a behavior, then maybe a question also encodes a little bit of a preference for the audience to respond, and a little bit of an assumption by the speaker that they have social authority to voice their preference with a reasonable expectation that the audience will comply.

Neat.

Habits, Perceived Affordances, and Interrupted Causal Contingencies

D asked me to fix the toilet. The chain that allows the handle outside the tank to raise the flapper within the tank (the toilet's epiglottis) had become detached, and she couldn't figure out how to reattach it. I sat down to examine the internals of the toilet tank, closed the valve on the water supply line, and placed my hands into the tank water.

"Eeh! Gosh, that's cold!" I exclaimed.

"Well," D said, "you already thought to close the valve on the water supply line. Why don't you flush the remaining water out of the toilet tank?"

Right! Yes! That's what I ...almost did? Why didn't I do that? I depressed the lever handle, ...

And the toilet didn't flush because the chain that allows the lever handle outside the tank to raise the flapper had become detached. That's the thing that I was trying to fix, remember? Pretty funny.

Or, no, probably not funny, but maybe it's the sort of thing I could write a post about? Yes! Done.

Of Confidence, Understanding, and Conclusions

Confusion and understanding are important concepts in rationality and metaethics. The concepts of conclusions and resolved beliefs are similarly important. These concepts appear in the written works of our culture much as they appear in our daily language: without precise definition or explicit connection to the mathematics of probability theory. One may wonder, what is the connection, if any, between epistemic confidence and understanding, or between confidence and resolved belief? Below I will outline some guesses.

What if like:
You're chatting with a peer. They ask a question and you pause for just a beat before replying. I propose that the condition of your mind during that pause is nearly the same as the mental condition of confusion. The difference is that the condition of a mental pause is almost imperceptibly brief, while confusion persists for noticeable and even uncomfortable spans of time. Confusion is a condition where the mind is generating and discarding ideas, having not yet settled on an idea which is good enough to justify ending the search. Confusion doesn't have to be preceded by a surprise, and not all uncertainty is accompanied by confusion. Where in a Bayesian mind do we place confusion? We place it not in uncertainty, and not in surprise, but in the failures of attempted retrieval and production of ideas, theories, strategies. On the time scale of milliseconds, these failures are mental pauses. On the time scale of moments and minutes and more, these failures are called confusion.

So much as the subjective feeling of confusion comes from noticing one's own failure at cognitive production, so contrariwise the feeling of understanding comes from noticing a productive fluency; noticing that one is quickly able to generate ideas, hypotheses, strategies in a domain, and to feel confident that the produced cognitive contents are valuable, and that the value justifies terminating a mental procedure of search or construction.

From the idea of terminating a mental train of thought, we naturally come to the concept of conclusions. I once wrote, "Our beliefs gain the status of conclusions when further contemplation of them is no longer valuable." There are many reasons why you might disvalue further contemplation of an idea! You could have other things which you prioritize thinking about! You could expect that there are no further reasons which could change your belief! You could notice a limit on your reasoning, like an inability to quickly find reasons which would change your belief, while still accepting that such reasons might exist! There could be unpleasant environmental consequences of continuing to contemplate a topic, like if someone didn't want you to think about the thing and they could tell when you were doing so!

So conclusions are merely those beliefs that you maintain when a train of thought terminates, and which you're likely to recall and make future use of. Resolved beliefs are different! I know that "resolved belief" isn't a standard term in the rhetoric of rationality or metaethics, but maybe it should be. Resolved belief is the great promise of maturity! A resolved belief is a conclusion where you have a reasonable expectation that no further reasons exist that would change your mind. The conservation of expected evidence means that something has gone wrong in your mind if you expect that your belief will change in some specific way after seeing some unknown evidence, but it is still entirely rational for you to expect that your beliefs will change in some unknown direction after seeing unknown evidence. If you expect that your beliefs will change a lot (in some unknown direction!), that just means you're about to see strong evidence. One way to phrase this situation is to say that the question whose truth value you are contemplating is about to be strongly tested.
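
Conservation of expected evidence is concrete enough to check numerically. Here's a toy sketch (all numbers invented for illustration) showing that the prior equals the expectation of the posterior, taken over the possible evidence outcomes:

```python
# Toy check of conservation of expected evidence:
# the prior equals the expectation of the posterior over possible observations.
p_h = 0.3                    # prior P(H)
p_e_given_h = 0.8            # likelihood P(E|H)
p_e_given_not_h = 0.2        # likelihood P(E|~H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

expected_posterior = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(expected_posterior)    # equals the prior, 0.3
```

You can expect to be surprised in one direction or another, but the surprises have to cancel out in expectation.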

I'm not totally sure that updating on lots of bits of evidence is the same as reaching a resolved belief. It seems like a different thing to say, "this issue is probably settled hereafter" than to say, "our inquiry changed our beliefs drastically". If the latter is to say that an idea has been tested, then the former is to say that the idea perhaps can be tested no further: that there is no test the observation of whose result will cause us to update our beliefs. The relationship between these two sounds like something that mathematicians have probably already figured out, either in analyzing convergence of beliefs in learning algorithms, or perhaps in logic or metamathematics, where settled conclusions, once tested now untestable, are very salient. Unfortunately, I'm too stupid for real math. Some quantities that seem relevant when making a judgement of whether you've dealt with most of the important possibilities are 1) the conceptual thoroughness of your investigations, per genus et differentiam, 2) the length of time that you've spent feeling confident about the topic and similar topics, and maybe 3) the track record of other people who thought they had investigated thoroughly or who thought that the issues were settled. For some issues, maybe you can brute force an answer by looking at how your beliefs change in subjectively-possible continuations of your universe? That's probably not a topic about which I can think productively.

On the topic of other maybe-relevant math that I'm too stupid to understand, I'm reminded of the idea of the Markov blanket of a variable in a probabilistic causal model. Within the pretense of validity of a given causal model, if you condition your belief about a variable X on the states of a certain set of nearby variables, the Markov blanket of X, then your belief about X shouldn't change after conditioning on the states of any variables outside of the blanket. Unlike in the case of beliefs that are settled by logical proof, I don't think it's the case that the untestable belief in X after having conditioned on its Markov blanket will necessarily be confident. Possibly that's not the case with proof either when you start talking about undecidability and self-reference. I guess the best question my curiosity can pose right now about resolvedness of belief is whether convergence of belief in probabilistic logical induction is related to the kind of independence result which obtains when you condition on the Markov blanket of a causal variable.
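
The screening-off property is easy to exhibit in a toy model. Below is a sketch (the chain structure and all conditional probabilities are invented for illustration): in a causal chain C → A → X → B, the Markov blanket of X is {A, B}, so once we condition on A and B, further conditioning on C shouldn't move our belief about X.

```python
from itertools import product

# Toy causal chain C -> A -> X -> B over binary variables, with made-up CPTs.
# The Markov blanket of X here is {A, B} (its parent and its child).

def p_c(c):    return 0.6 if c == 0 else 0.4     # P(C)
def p_a(a, c): return 0.8 if a == c else 0.2     # P(A|C)
def p_x(x, a): return 0.9 if x == a else 0.1     # P(X|A)
def p_b(b, x): return 0.7 if b == x else 0.3     # P(B|X)

def joint(c, a, x, b):
    return p_c(c) * p_a(a, c) * p_x(x, a) * p_b(b, x)

def posterior_x(a, b, c):
    """P(X=1 | A=a, B=b, C=c), computed by enumeration."""
    num = joint(c, a, 1, b)
    den = sum(joint(c, a, x, b) for x in (0, 1))
    return num / den

# Conditioning on C, which lies outside the blanket, changes nothing.
for a, b in product((0, 1), repeat=2):
    assert abs(posterior_x(a, b, 0) - posterior_x(a, b, 1)) < 1e-12
print("C is screened off by the blanket {A, B}")
```

The factors involving C cancel in the numerator and denominator of the posterior, which is exactly the independence result the blanket promises, though, as noted, nothing forces that screened-off posterior to be a confident one.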

Learning rate adaptation is another math thing in the semantic neighborhood of resolved belief, but it's not a thing Bayesians often talk about, from what I've seen.

But so then:
The feelings of confusion and understanding both accompany our judgements about our individual abilities to quickly produce valuable cognitive contents in a conceptual domain. In familiar domains, we find that some beliefs and valuations, "conclusions", are reliably useful after being reproduced - retrieved or rederived. Confusion and understanding of a deeper kind pertain to a specific kind of conclusion that I've called a resolved belief, somewhat neglecting the notion of resolved valuations. In my terminology, we say that a belief is resolved to the extent that we expect that there are no facts we could learn which would change our minds about the belief. As I have introduced the notion, probability theory does not require that a belief which no longer seems testable need be confident. Still, I have a strong intuition that resolved beliefs and valuations should generally be and will generally be confident, in just the same manner that I become more confident that I know the contours of a face after doing a portrait study. While I think that resolved beliefs will generally be confident, contrarily, it's abundantly clear that one's confidence in a belief or valuation alone cannot justify an expectation that the thing won't change in the future. I don't yet know what theoretically justifies a judgement that an issue is settled, but maybe it has something to do with belief convergence in the mathematics of learning. The concept of metaethically resolved valuation might then be viewed as a variety of resolved belief in the domain of self-characterization. I hope that other philosophical questions can also be translated into terms where the mathematics of learning is clearly relevant, so that philosophical questions can potentially be resolved just as much as can computational ones or physical empirical ones. Resolved belief is the great promise of maturity, and it's high time that we figured out what it means.

Addendum: Hrm, the papers I've seen so far about posterior convergence make it seem like statisticians care about such things for pretty different reasons than I do. The usual situation presented in the literature on convergence of parameter estimates is like, "You get a sequence of percepts of the thing, and the noise has a certain shape. Depending on the shape, we can show how fast your guesses become better." Adapting that to be relevant to confusion, I'd want to frame attempts at cognitive production as noisy percepts of something. That's not the worst idea, but it doesn't seem to capture most of my idea of the source of failures of cognitive production. Maybe the literature on decreasing KL divergence in model selection will help to clear things up.

*distant voice*: Aren't you forgetting about Solomonoff induction?

Oh. Yeah. That's probably the topic I should review first.

My Lives and Deaths in a Big World

Among the describable moments where I remember this moment as the last one that I experienced, maybe a lot of them involve dying as my thermal fluctuation of a brain, filled with hallucination, returns to non-existence.

That situation isn't one I normally consider when going about my life, and so the question arises of whether my behavior is adapted to it.

Maybe it makes reflective sense, and not just intuitive sense, to not care much about the observer moments in the lives of Boltzmann brains, because one's decisions as instantiated in Boltzmann brains won't have much of any effect on those brains' surrounding environments, given that the rest of one's supposed body probably won't exist there to be actuated, and even if one's bodies do exist, one would still be effectively swatting in the dark at imagined flies.

Taking this idea a little further, we find the suggestion that we should perhaps generally care less about incarnations of our bodies which are more disconnected from their surroundings, those being less perceptive or less powerful to effect changes. Consider however that this is surely not the principle which informs our intuitive expectations of worldly permanence and our intuitive lack of concern for observer moments that arise in thermal fluctuations: if so, we'd also disbelieve that we could be blinded or demented or paralyzed.

In locales where brains persist long enough to effect biological reproduction, it's little surprise that brains evolve with expectations of sensory persistence. If we endorse the value of our intuitions of sensory persistence, then perhaps the process of evolution which endowed us with those intuitions can also provide us with principles for forming reasoned beliefs regarding how to act in a universe or multiverse large enough to contain many incarnations of our minds in different locales.

"Act as though only those future moments are real in which you might reproduce" doesn't sound like wisdom. What other possible lessons could we abstract from evolution? "Act to achieve good states in worlds where you can do so"? That might be a principle which prescribed avoiding paralysis in the EEA, while excluding thermal weirdness. Although it sounds much less evolution-y than the first one.

Complementary to the topic of intuitive dis-belief in fluctuation-death is our intuitive actual-belief in total death - i.e. in the complete cessation of our subjective experience, despite arguments suggesting subjective immortality of minds in a big world. And yet the explicit reasons for discounting the decision-theoretic value of Boltzmann moments (of seemingly extraordinary death) listed above do not seem to provide complementary reasons to discount extraordinary survival scenarios.

And so once again I am left wondering whether my evolved intuitions are ill suited to thriving in a big world.

We've all heard the quantum immortality thought experiment, and we've mostly all found some reason to not commit quantum suicide. Beyond this, I know almost no sources of relevant advice.

Of course Robin Hanson wrote How To Live In A Simulation, and Eliezer Yudkowsky once made the intriguing suggestion that one should open one's eyes to decrease world simulation measure when bad things happen (and to close one's eyes to reduce simulation load when good things happen). This advice is in the right weirdness neighborhood, but not obviously relevant to taking Boltzmann brains or subjective immortality seriously.

Cultural Relays

This post is sort of a response to “Don’t Be a Relay in the Network”, but I've tried to make it self-contained.

Some websites have strong information currents; there are lots of posts, and new ones are constantly pushing older ones out from their place of prioritized visibility on the top of the stack, down toward the archives of darkness. The archives are so dark that the stack is effectively a pipe, with old posts being forgotten entirely, at least from the memories of site users. 

New, highly visible posts are the best place for site users to write comments if they're looking to socialize, and so the speed that new posts get buried under even newer posts is a strong determiner of how long people will stay in a comment thread before leaving for newer content and larger crowds. When the incentives for conversation are particularly bad on a streaming media channel, people move away from commenting at all and toward a media consumption strategy that consists of only snap-judgements: liking, disliking, sharing, following, and blocking. In short, click and click and do not stop. 

Like watching televised shows, this clicky strategy allows users to stay abreast of topical content, and like watching televised shows, this clicky strategy cannot support a thriving culture where people interact with each other or make things of value. Even in less extreme cases which do support socialization, a stronger information current makes for a culture with a shorter memory, where ideas and references have a briefer shelf life of cultural relevance.

The original essay suggests that this state of affairs is poor for the prosperity and mental health of site users, because they lose their personal significance in the community as creators when they only passively consume media produced by large-scale external forces or thoughtless group dynamics. Users of websites with strong information currents are attuning themselves to the dominant rhythms of an impersonal culture, says the author, and each user becomes less of a person and more of a mere relay in the network, doing little but passing and blocking signals.

The prosperity of mental life is a fine thing. I'd like to list some additional reasons to resist becoming a relay in the network. Firstly, some information merits, for one’s own use, a deeper understanding, produced by long, deliberative research and contemplation. If you become a Relay, you will fail yourself. A person must have a long memory to learn some things of great value.

Another reason that we should invest more time in valuing and responding to content has a universalizing character. The author mentions a “clever, fictive metaphor bandied about by pseudo-mystical techno-utopians”, that a society is a mind and its members are as neurons. I, being such a techno-utopian, am of course putting this metaphor to use. The reason is this: there are things which cannot be achieved by cultures with short memories and fleeting interests. If we imagine setting the ratio of time that members of a community will devote to gaining broad informedness versus deep understanding, we see that the short memory extreme, with its vanishing and exploding feedback gradients, has few and shoddy basins of attraction.

Like the narrative distorted into myth, like the speech distorted into chants and slogans and soundbites, like the joke distorted into habitual references, like the interests distorted into stereotypes, and like the motive appeals distorted into click-bait, the products of short memory culture cannot sustain context or nuance. These are washed away by the current of topicality, and much value goes with them, down into the dark archives. 

So, it is good to resist becoming a relay in the network. If you want to relay some signals, fine, but then also go spelunking in the dark. Write about old works. Reinforce others who try to connect disparate fields and those who show that they've engaged with their ideas at length. Work on long term projects. Ultimately, build a creative culture with a long memory, because that is necessarily the domain of complex human flourishing.

Capitalized Abstract Nouns

On Facebook, Marcello Herreshoff posted a list of Capitalized Abstract Nouns that people believe in. Marcello provided it as a resource for an exercise to help individuals explicate their own preferences, but that exercise is not my main interest here. My aim is to form a taxonomy of things that people believe in, working off of Marcello's list.

-

Conditions: People believe in preferred conditions of individual existence such as {innocence, dignity, strength, health, and freedom} and conditions of collective existence, such as {equity, order, accountability, and justice}. Also, people believe in beauty, which might be a preferred condition of reality, independent of the existence of people. Asserting a belief in these is perhaps asserting that the audience should also value the conditions.

Standards: People believe in behavioral standards or procedures (both individual & collective) such as {decency, fairness, honesty, efficiency, cleanliness, discipline, faith, and patience}. Statements of belief in these are partly assertions of their terminal value, and partly assertions of their instrumental efficacy in obtaining good conditions.

Agents: People believe in particular agents (individual & collective) such as {god, nature, human society, America, the American military, the global market, Whatever Inc, Senator Whomever, and themselves}. Belief in these is mostly about the ability of the agent to obtain good conditions, and partly about the ethical behavior of the agent, and a little bit about the terminal value of the agent's existence.

People believe in non-particular agents also, such as {family, friends, volunteers, patrons, citizens, stewards, and leaders}. Saying "I believe in family" means something very close to "I believe in the value of the relational behavioral standard of kinship," but I've decided to make this a separate category from believing in the other behavioral standards like decency, fairness, and honesty.

Projects: People believe in projects, movements, and social systems, such as {the reformation, open source software, democracy, feudalism, the war on drugs, and miscegenation}. Belief in these is a lot like belief in particular agents.

People believe in bodies of knowledge and procedure, such as {education, medicine, tradition, law, and functional programming}. These are more like the behavioral standards.

Dunno: The last one I really care about categorizing is "The Future". Is that like believing in a particular narrative, as in "I believe in the virgin birth of Christ"? Or is the future conceived of as an agent bringing about desirable conditions? Or is the future thought of as a desirable condition itself, like beauty? Or is the future a non-particular collective project, like believing in revolution? Maybe it's the normal kind of belief, and people are just saying that the future exists. Probably not that one, but I don't know.

-

SO THEN: People believe in good things, good lives, good conduct, good people and institutions and social roles, good projects, and maybe things in whatever category The Future belongs to. Riveting.

Pragmatic evidential semantic speech attitudes or something

There are more possible relationships between the content of your speech and your meaning than "serious" and "sarcastic".

  1. I believe what I'm saying.
  2. I kind of believe what I'm saying.
  3. I said a thing that I believe in an unusual way so that you would understand how sincerely I believe it.
  4. I would be surprised if what I was saying was mostly right, but we have to start with something if we are to compose a productive theory.
  5. Some part of me believes what I'm saying, but I don't endorse that.
  6. Some part of me doesn't believe what I'm saying, but I don't endorse that.
  7. I believed what I was saying as I was saying it, but then I totally remembered or figured out that it was all wrong.
  8. I don't really mean any of what I'm saying, but I'm saying it because of some impulse.
  9. I don't strongly believe what I'm saying, but I'm still saying it because it's (funny | hurtful | interesting | a prank, mate | shit-posting | so random hahaha | expected of me | habitual | what someone else said | useful | phatic | ... ).
  10. What I'm saying is literally true, but I don't mean all of the connotations that you're reading from it.
  11. What I'm saying is literally false, but I meant the connotations that you're reading from it.
  12. I don't know how to say a thing that I would mean, but what I'm saying gets close to the issue.
  13. I know what I would like to say, but I'm saying something slightly wrong for sake of brevity.
  14. I know what I would like to say, but I'm saying it with lots of indirection because you wouldn't respond well.
  15. I'm not resolved yet as to whether I believe what I'm saying, but I will believe if someone finds a laudable reading of it or if lots of people seem to like it.
  16. The state of affairs became true because I declared it.
  17. What I'm saying is unrelated to my beliefs.
  18. I thought I didn't believe the thing as I was saying it, but I have now been convinced by hearing my own argument articulated aloud.
  19. I believe what I'm saying, but not quite so much that I want to defend it while some asshole on twitter picks it apart.
  20. I meant multiple readings of what I said, to different degrees toward different audiences, but I wouldn't normally try to simultaneously independently steer the beliefs of multiple audiences unless I were showing off to yet another audience.
  21. I said the thing precisely because I thought you wouldn't know what it meant or if it meant something.
  22. What I'm saying doesn't mean anything.
  23. I meant the opposite of what I said, you butt.

Candidate Moral Imperatives

  1. Do good. Achieve good. Maximize good.
  2. Improve things, especially conditions for people and moral patients. Don't worsen things. Optimize everything. Depessimize everything.
  3. Do wondrous deeds.
  4. Be good effectively. Waste no motions. With every move, become closer to achieving the goal.
  5. Make progress. Invest effort where gains will accrue.
  6. Do the things you think you should be doing.
  7. Do the least total harm. Prevent large harms.
  8. Prevent bad and insufficient good. Prioritize preventing frequent and severe bad things.
  9. Keep bad things small and infrequent.
  10. Enable the least harm to triumph over greater harms.
  11. Signal your moral character more effectively.
  12. Summon good.
  13. Avoid paths leading into hells.
  14. Party on responsibly.
  15. Create good. Create more and better good.
  16. Cause the change you wish to see in the world.
  17. Reduce the expected magnitude of potential harms.
  18. Abstain from vice.
  19. Be cautious in ever doing things that you consider sinful or which others widely consider sinful.
  20. Don't just not do the thing, but leave a safe margin around doing the thing.
  21. Preserve good. Destroy bad.
  22. Act so as to best serve universal well-being.
  23. Hack at the sources of bad.
  24. Be good. Be good better. Be good ambitiously. Be virtuous. Be great. Become better.
  25. Be so fucking rad.
  26. Build good character.
  27. Become an ideal. Do god's job. Become a savior.
  28. Don't become a grotesque parody of yourself.
  29. Become a superlative example of righteousness.
  30. Don't be a dick.
  31. Attach the self to meaning.
  32. Continually strike envy in the hearts of men with the weight of your philosophical authority on righteousness.
  33. Be reasonable, rational, prudent, wise.
  34. Become the person.
  35. Become the forerunner.
  36. Find ways to become better.
  37. Be less than the regular amount of terrible for a human.
  38. Become proof that greater unimagined goods are possible.
  39. Be just. Alter the distribution of benefits and burdens across people to be closer in accordance with the merits of those people? Be reputable. Keep promises? Be responsible. Make amends if you previously worsened things? Be scrupulous.
  40. Be nice. Be excellent to each other. Be charitable.
  41. Learn to want to follow whatever the correct standards are.
  42. Learn skills.
  43. Preserve parts of yourself which may be good.
  44. Hold yourself to a higher standard.
  45. Become good enough to justify the continued existence of the places you inhabit.
  46. Don't continue becoming not good, if that's what you're doing.
  47. Become terrifyingly formidable.
  48. Become a person who seems to hail from a culture with greater moral and philosophical authority.
  49. Become better, even knowing that better might not be good enough.
  50. If you're an ok person, and you want to improve in some ways, and other people would want you to improve in those ways, become that person.
  51. Become a person whose words show that you bear a great wealth of careful reasoning and relevant practice.
  52. Cultivate a hunger to improve.
  53. Commit to doing good. Pledge yourself to good. Comply with your past declarations of intent to do good.
  54. Regain the ability to respond to the meaning of things.
  55. Try to try.
  56. Live in your name.
  57. Adopt important responsibilities.
  58. Cultivate an intrusive conscience.
  59. Set your heart on a pilgrimage.
  60. Maintain a desire to improve.
  61. Do good relentlessly.
  62. Don't look for reasons to do bad.
  63. Act as though you care strongly about things that affect lots of people, even when you don't.
  64. Occasionally cultivate a motivated disposition insensitive to goal feasibility.
  65. Reinforce good. 
  66. Help people to be good. Show gratitude when another person improves conditions for you or yours.
  67. Coordinate with good. Affiliate with causes that are trying to do good.
  68. Promote participation on solving important problems. Prioritize important topics that are hard to have opinions on.
  69. Become a rallying point for good.
  70. Create places for people to do good. Build reputable organizations for achieving good.
  71. Seek and amplify the good processes that created the good parts of you.
  72. Find and amplify the process that generates entities that heed meta-level advice such as this.
  73. Evaluate goodness anew.

  74. Go meta first.
  75. Seek whence.
  76. Act in a way that greater commands your own endorsement.
  77. Don’t do things that prevent people from offering you good opportunities.
  78. Don’t do things that spoil people's ability to reach the Pareto frontier.
  79. If you should simply do well what you were meant to do, only figure this out by being meant to figure well.
  80. Do correctly whatever you have been (self-)created to do.
  81. Do things for which you and members of your outgroup would both want to be remembered.
  82. Do the things which a better version of you wishes you would do.
  83. Act by universalizable principles.
  84. Act by principles which everyone can rationally will that you act by.
  85. Act by principles which no one can not reasonably reject that everyone act by.
  86. First take the plank out of your own eye.
  87. Keep your identity small.
  88. Make no graven image capable of suffering.
  89. Reduce suffering generally.
  90. Reduce existential risk for anything which should exist.
  91. Find new ways to quantitatively estimate how good some possible courses of action are.
  92. Prevent the heat death of the universe. Avoid the loss of resources generally.
  93. Enter situations where you can act decisively.
  94. Do not boil a young goat in its mother's milk.
  95. Write fresh scripture. 
  96. Stay out of your own way.
  97. Don't shoot yourself in the foot. 
  98. Eat mostly plants.
  99. Don't believe everything you hallucinate.
  100. Help people to analyze the failures of their self-analysis.
  101. Try to identify what things might pave a slippery, sloping road to hell.
  102. Harvest baboon organs.
  103. Analyze incentives producing coordination problems.
  104. Over-chlorinate not thy neighbor's pool.
  105. Cast no aspersions on ants.
  106. Love and serve one another.
  107. Be the aliens you wish would contact the world.
  108. Become the friend you want to talk to.
  109. Become someone who lives on their own terms. Become the creator of products you love. Become the mentor you never had. Become the child you wish you had. Become the parent you wanted to grow up with. Become the spouse you want to be with.
  110. Have fun. Do not hurt people. Do not accept defeat. Strive to be happy.
  111. Attain knowledge of Causes and secret motions of things. Enlarge the bounds of Human Empire to the effecting of all things possible.
  112. Be a simple kind of man. Be something you love and understand.