That situation isn't one I normally consider when going about my life, and so the question arises of whether my behavior is adapted to it.
Maybe it makes reflective sense, and not just intuitive sense, to not care much about the observer moments in the lives of Boltzmann brains, because one's decisions as instantiated in Boltzmann brains won't have much of any effect on those brains' surrounding environments: the rest of one's supposed body probably won't exist there to be actuated, and even if one's body does exist, one would still be effectively swatting in the dark at imagined flies.
Taking this idea a little further, we find the suggestion that we should perhaps generally care less about incarnations of our bodies which are more disconnected from their surroundings, those being less perceptive or less able to effect changes. Consider, however, that this is surely not the principle which informs our intuitive expectations of worldly permanence and our intuitive lack of concern for observer moments that arise in thermal fluctuations: if it were, we'd also disbelieve that we could be blinded or demented or paralyzed.
In locales where brains persist long enough to effect biological reproduction, it's little surprise that brains evolve with expectations of sensory persistence. If we endorse the value of our intuitions of sensory persistence, then perhaps the process of evolution which endowed us with those intuitions can also provide us with principles for forming reasoned beliefs regarding how to act in a universe or multiverse large enough to contain many incarnations of our minds in different locales.
"Act as though only those future moments are real in which you might reproduce" doesn't sound like wisdom. What other possible lessons could we abstract from evolution? "Act to achieve good states in worlds where you can do so"? That might be a principle which prescribed avoiding paralysis in EEA, while excluding thermal weirdness. Although it sounds much less evolution-y than the first one.
Complementary to the topic of intuitive disbelief in fluctuation-death is our intuitive actual belief in total death - i.e. in the complete cessation of our subjective experience, despite arguments suggesting the subjective immortality of minds in a big world. And yet the explicit reasons listed above for discounting the decision-theoretic value of Boltzmann moments (of seemingly extraordinary death) do not seem to provide complementary reasons to discount extraordinary survival scenarios.
And so once again I am left wondering whether my evolved intuitions are ill suited to thriving in a big world.
We've all heard the quantum immortality thought experiment, and we've mostly all found some reason not to commit quantum suicide. Beyond this, I know of almost no sources of relevant advice.
Of course Robin Hanson wrote How To Live In A Simulation, and Eliezer Yudkowsky once made the intriguing suggestion that one should open one's eyes to decrease world simulation measure when bad things happen (and to close one's eyes to reduce simulation load when good things happen). This advice is in the right weirdness neighborhood, but not obviously relevant to taking Boltzmann brains or subjective immortality seriously.