I’ve come to find that there’s a computational superpower that comes from believing in something commonly called ‘good’.
My belief in good, which is extremely useful, works like this:
What I believe
we are all traveling through some space of possibilities; many things could be, but only a tiny fraction of those things are
my choices are the only control I have over how I navigate the space of possibilities
there is a meaningful direction through the space of possibilities, called “good”
“good” is, roughly speaking, a compass pointing from the past, towards distant future possibilities which are far more complex and more alive than the present
critically: the distant future possibilities that are most good (i.e., most alive) are clustered together:
e.g. most good futures, hundreds of thousands of years out, involve people traveling in the galaxy, after having built some crazy energy source, like fusion power followed by a Dyson sphere around the sun
e.g. most futures that are good, for me, involve me being healthy, calm, loving, playful, wealthy and tough
evolutionary fitness, individual emotions, corporate incentive structures and financial economies all roughly approximate “good”, but none of these things are good itself, because perfect goodness itself is incomputable
When I feel good, I gain more energy, which I can then use to do more good, leading to a positive feedback loop.
Similar loops exist at larger scales, for groups of agents cooperating. Living organisms - made of cells, people, or memetic machines - all depend on the operation of a loop like this to keep themselves alive.
Problems with Not Believing ‘Good’ Means Something Real
When I didn’t believe good was a unified direction, trying to act rationally meant ‘acting in pursuit of my goals’. This was hard because my goals were just some arbitrary function to which the universe was totally indifferent. Because my actions and goals had no simple, meaningful relationship to the likely course of the future, evaluating choices meant looking at each one individually and trying to judge ‘how good it is.’
This approach is hard because it requires:
an accurate cause-and-effect understanding of the world, plus the entire current state of the world
evaluating all the possible future worlds that could result from your choices, in terms of ‘how good they are’
There are some obvious wins here, like donating to charities that save lives by giving out cheap treatments for parasites, or distributing malaria nets. Any thinking person with concern for human welfare can see that donating some amount to an effective altruism charity is obviously a good course of action.
Maybe I'm just arrogant, but that didn’t feel like enough good for me. I want to donate one hundred times as much as I do right now! I don’t think people who have 10 times the wealth I do are 10 times smarter, more capable, or more hardworking than I am. I want to earn a billion dollars so I can give most of it away. I know enough people who’ve made it to be billionaires that it seems at least worth trying. Even if “all” I manage to accomplish is reaching a net worth of $100 million and then giving $9 million a year to charity until I die and give the rest away, isn’t that a fine outcome?
But doing that would require focusing more on work, growing my career, and thus possibly donating less now in order to hire more help so I can put more energy into work. What’s the right tradeoff to make when comparing ‘possible future good’ vs ‘present good’?
On top of that, I have young kids and aging parents. How do I trade off “a $4000 donation that saves one life, vs another month of babysitters, so I can possibly earn more money in the future,” while adding in “I want to spend quality time with my kids while they are young,” and “I want to take care of my parents while they age, and they are lonely and want to see me more”? Hiring babysitters and home health aides doesn’t give my parents and kids the feeling that they are loved by me, and matter to me. How much should I value that vs. saving another kid’s life?
I don’t want to be an absentee father who works really hard and never sees his kids, even if my explanation to them is that each four thousand dollar donation can save the life of some other kid far away. Yeah, sure, duh, of course I’ll save a kid from drowning right next to me. Obviously. But what about the fact that kids are always drowning, and pulling other children from the water leaves me too tired to play tickle monster with my own kids?
How do I trade off saving one more kid from drowning vs playing tickle monster with my own kids? I can’t pretend this isn’t a real tradeoff I face. How can I know how much time they really need from me, vs how much is just extra on top?
I care so much about so many different things, all of which require my time and attention, and all of which force me to trade off doing some good now vs investing in my capacity to do more good in the future. This is important to me to get right. How can I possibly weigh tradeoffs between these sacred things?
If you think of “good” as being “an arbitrary function computed on future states of the world”, evaluating all kinds of different choices becomes an exercise in trying to weigh all kinds of unknowable, unquantifiable, desirable things against each other.
How could you possibly know you’ve gotten it right?
Belief in Good Dramatically Lowers Compute Requirements for Doing Good
The simple answer here for me was to believe that ‘sacred’ actually means something different from “what I really, really, really, really want.” If my sense of the sacred is, like any other sense, detecting some property of reality, then I can work on calibrating it, and then rely on my senses and intuition, aided by reason and logic. I can then trust, not myself, but the goodness property of reality, to carry me towards better futures.
My sense of smell is a bunch of computation layered on top of olfactory nerves. This sense helps me know what good things are nearby and what things I ought to avoid. I think the sense of the sacred is also a computational layer, informing me of some aspect of physical reality. And man, does this work.
If you think there are multiple good futures which are disjoint from each other, sealed off from each other forever, then each fork in the road you come to becomes an exercise in damning something good to permanent nonexistence. For every few days’ worth of time I spend playing tickle monster with my kids, I condemn some stranger’s kid to drown. Each time I pull a drowning stranger from the pond, I need to spend several days away from my kids and parents, telling them both, yes, I love you, but I must help these other people. I can’t sustain that without feeling sad, lonely, and burnt out.
Without a belief that good is unified and points in one coherent direction, and that my feelings tell me how I’m moving with respect to this direction, the cognitive load of trying to compute the most good path becomes insane. I would need infinite confidence in both my understanding of the cause-and-effect world and my understanding of whatever function I’d be using to evaluate how good world states are, to avoid spending all my time worrying about whether I’m making the wrong choices.
Maybe some people can pull that off, but for me, thinking in that manner left me a jittery, anxious wreck. This notion of good is like trying to build a rocket ship targeting a specific star system. Building a rocket is hard enough as is, without trying to figure out which star system is correct.
The alternative is having an attitude that says ‘I don’t need to choose distant star systems; let me simply get up there, and then I’ll figure out the next step when it comes along.’ This kind of thinking is only possible if you think ‘up’ means something. Of course, we can see the stars and measure them, so this is far easier to believe about ‘up’. What I’m doing is applying the same kind of thinking to ‘good’.
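To make the compute claim concrete, here’s a minimal Python sketch of the argument. The numbers and the `compass` function are hypothetical, invented purely for illustration; nothing here comes from the post itself:

```python
# A toy sketch of the compute argument above. The numbers and the
# `compass` heuristic are made up for illustration.

BRANCHING = 10  # choices available at each decision point
HORIZON = 20    # how far ahead a "full evaluation" would have to look

# Strategy 1: score every reachable future world-state, then pick the best
# path. The number of futures grows exponentially with the horizon:
futures_to_score = BRANCHING ** HORIZON
print(f"exhaustive search must score {futures_to_score:,} futures")
# -> 100,000,000,000,000,000,000 futures: hopeless.

# Strategy 2: trust a local compass. Score only the immediate options,
# take the one that feels best, and repeat. Cost per decision: BRANCHING.
def compass(option: int) -> float:
    """Stand-in for a calibrated sense of 'good'; scores one nearby option."""
    return -abs(option - 3)  # toy heuristic: option 3 "feels" best

path = [max(range(BRANCHING), key=compass) for _ in range(HORIZON)]
print(f"compass-following scored only {BRANCHING * HORIZON:,} options total")
```

The exponential blowup is the whole point: nobody can score 10^20 futures, but anyone can score ten nearby options at a time, over and over.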
The belief that goodness is an inherent path that exists, and I just need to keep trying to point myself roughly in that direction, says something like,
“you are indeed balancing 300 different sacred subgoals that trade off against each other, but don’t worry about getting it perfectly right. Your ability to do good is constrained by how good you feel, so taking time off to rest instead of pulling more kids from the pond is fine, because even in the best possible distant future, there will likely still be kids drowning in ponds, but there’ll be even more ponds, even more kids, even more choices, even more good things, and even more awful experiences all at the same time.
This doesn’t mean things don’t get better! Look back 400 years in history and compare that world to our world today. Would you want to live in their world, or ours? Their world was more awful than ours, so why should they have felt more confident in progress than you, who have actually seen it? Yet our world today was largely built by people back then advancing their notion of what good was. You are living in a spaceship they built, and yet you don’t think the concept of spaceships means anything. We are pulling kids from ponds with technology that didn’t exist back then, and yet somehow you are more surprised by the present you live in than the people who built towards it without ever seeing it!
The only way this makes sense is if their belief in good actually meant something.
Your only real choice is how much you help build towards the future, if at all. If taking care of those around you makes you feel better, then do that. But if you feel compelled to reach for the stars, do that too - don’t be afraid, just follow your well-calibrated heart to maximize the good you can do.”
Is this a reasonable thing to believe?
As far as results go, I can promise it’s working for me. When I didn’t believe ‘good’ meant anything, when I thought of myself as a biological robot following evolutionary imperatives and cultural dictates, I was miserable. I was making terrible choices. I don’t think this should be a surprise.
Once I started believing in this idea of good, the quality of my choices started improving, and my life did as well. The life I have now is far better than anything I could have dreamed up years ago. To me, this seems like further evidence that “imagine the best future you can and then retroactively construct a path to get there” might be fine for closed, predictable systems like games, but doesn’t really work as a strategy for navigating the game of life as a human being.
Many of our ancestors followed strategies based upon the ideas I’ve listed above. How well did those strategies work for them? Simply learning enough history, and then imagining myself living through that history has convinced me that:
history has always seemed wild and scary for those living through it, and yet the world we live in now is far, far, better than the wildest dreams of our ancestors
it’s very difficult to imagine a strategy that consistently outperforms, “invest in relationships with those around you, learn a valuable skill, live beneath your means, refuse to worry about things beyond your control, and focus instead on being the best person you can be”
Ultimately, you’ll have to decide for yourself if this way of thinking makes sense. If you find yourself struggling with the problems I described above, try ‘playing around with’ the hypothesis and see if it works for you. Try pretending “goodness” means something, try to make yourself more good, and see if your life becomes fuller and richer.
I would love to hear about the results of your experiment :)
Just wait for when you make it beyond belief into knowing...
It seems to me that this belief can be summed up as "It's okay to be a goodness satisficer, not a goodness maximizer. Some things in life are supererogatory, not mandatory. We just have to get close enough, because good things are like the breadth of the sky rather than the stars in space: you can't miss as long as you're going in the right direction."
In other words, a perfectionist's acceptance of the demandingness objection (https://en.wikipedia.org/wiki/Demandingness_objection), rejecting perfection in favor of good enough. Would you say that this is accurate? Possibility space may be vast and full of horrors, but the set of paths that take us to the 'good endings' is vaster still. And, crucially, they stick together as a ray of light in the dark. Any path can be the right one as long as it heads towards the light instead of the darkness. Any destination will do, because all the destinations can be connected by another path through the light. You just need that guiding light - and your own two feet to start walking.
(Or more prosaically, heuristics are great when the computation problem is intractable.)
(Also, you may want to get checked for Pure O OCD: https://www.mind.org.uk/information-support/types-of-mental-health-problems/obsessive-compulsive-disorder-ocd/symptoms-of-ocd/. Stuff like "you are indeed balancing 300 different sacred subgoals that trade off each against each other..." makes it sound like you're suffering from, or suffered from, intrusive thoughts about not doing enough good.
And, well, your style of writing here makes it sound like you're still being bothered by this, such that you can't relax and write something easygoing. All the italics just sound... tense.)