Really enjoyed this! Thanks for writing it up. I'd love to see some more links out to sources (e.g. was surprised to see an article on map vs territory that doesn't mention https://en.wikipedia.org/wiki/Alfred_Korzybski)
A few things stood out:
> There is only one Territory
You could call this true by definition, but it's interesting to consider ways it might be untrue. Could there be several, causally-disconnected territories out there? Could reality be composed of many disparate but overlapping subjective realities, as Schrödinger thought [1]?
> no map can accurately, and completely, represent the gap between that map and the territory
There's a great Borges story, "On Exactitude in Science", about a map that completely covers its territory. Fantasy, obviously, but worth a read! I think you'd like Borges in general.
> However, because the maps in our brains are physical objects, they constantly follow their equation of motion and thus move along energy gradients toward their lowest energy state
Is this true? People are open systems (we literally ingest energy), and most physical laws are stated in ways that assume closed systems. We certainly don't just follow the energy gradient--we're able to kick ourselves out of local minima pretty easily. Otherwise I'd never get off this couch!
If you wanted to model a person as a (closed) physical system moving through a phase space, you'd probably have to include the Sun.
> Emotions have a valence: positive or negative. Our maps move towards positive feelings, and away from negative feelings. Conflict - inconsistencies in our maps - often feels unpleasant (i.e. has a negative valence).
>
> Our brains attempt to move away from the negative feeling induced by conflict, either by changing the position of physiology (distracting ourselves, looking away, walking away, running away) or else by adding, removing, or modifying existing beliefs.
I assume you're getting at something like Predictive Coding here. I disagree that we avoid conflict--we actually seek it out! We actively look for information that rests on the edge of predictability. We _really_ like surprise that can be integrated into our maps, the bigger the surprise the better (e.g. misdirection in humor, horror movies, doom scrolling...pretty much all media).
See also: https://astralcodexten.substack.com/p/jhanas-and-the-dark-room-problem
[1] https://superbowl.substack.com/p/church-of-reality-schrodinger-believed#%C2%A7intersubjective-reality
Thanks for the comment!
This is the first time i've heard of Korzybski, so i'll add him to the reading queue. I had encountered 'map' and 'territory' a lot and never thought about where these terms came from. I used to think i had new ideas. But what i am finding, the more i read, is that practically every thought i think i've 'developed on my own' has already been thought before, often multiple times.
> Is this true? People are open systems (we literally ingest energy), and most physical laws are stated in ways that assume closed systems
The traversal of energy gradients following the equation of motion is not, i think, one of them.
> We certainly don't just follow the energy gradient--we're able to kick ourselves out of local minima pretty easily. Otherwise I'd never get off this couch!
This is a question i had for myself for a long time. The conclusion i reached was that 'getting off the couch' is, firstly, descending a chemical energy gradient. So from an energy perspective, you getting up or down from the couch isn't _that_ different from an elevator going up or down: there's effectively a counterweight, so there is a net energy change, but it's not huge.
Second, i think the reason you get up from the couch is that you _want_ to. That wanting is, i think, manifest as an energy potential in your brain, because your brain is doing some 'energy minimization thing', and the disconnect between what you want and where you are now corresponds to the magnitude of some energy state in your brain.
At least, this is my understanding. I'm not a neuroscientist or anything.
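If it helps, here's the cartoon version in code. Everything in it (the landscape, the 'want' potential, the constants) is invented purely for illustration - a 1-D toy, not a model of a brain:

```python
# A 1-D cartoon of "wanting as an energy potential". All functions and
# constants here are made up for the example.

def base_energy(x):
    """Double well: local minima at x = -1 (the couch) and x = +1 (the goal)."""
    return (x**2 - 1)**2

def want_potential(x, goal=1.0, strength=1.0):
    """'Wanting' modeled as an extra potential penalizing distance from the goal."""
    return strength * (x - goal)**2

def total_energy(x):
    return base_energy(x) + want_potential(x)

def grad(f, x, eps=1e-6):
    """Numerical gradient, to keep the sketch self-contained."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = -1.0  # sitting on the couch: a local minimum of base_energy alone
for _ in range(2000):
    x -= 0.01 * grad(total_energy, x)

print(f"{x:.3f}")  # ~1.000: plain gradient descent leaves the couch,
                   # because the 'want' term removed the barrier entirely
```

No extra mechanism is needed to 'escape' the couch minimum; once the want term is added, the couch simply isn't a minimum of the combined landscape anymore.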
> We actively look for information that rests on the edge of predictability.
I think we are doing this, in effect, to _reduce_ conflict between 'expectation of a certain amount of novelty' and 'present amount of novelty'. The key idea i'm trying to get at is that reducing conflict globally might look a lot like dramatically increasing conflict locally.
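In optimization terms, i think this is roughly what simulated annealing does: accept moves that _increase_ conflict locally in order to end up lower globally. A toy sketch (invented landscape, invented 'temperature'):

```python
import math
import random

random.seed(0)

def conflict(x):
    """Rugged landscape: many shallow local minima, deepest one near x ~ 3.5."""
    return 0.1 * (x - 4)**2 + math.sin(5 * x)

def step(x, accept_uphill):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = conflict(candidate) - conflict(x)
    if delta < 0:
        return candidate  # locally conflict-reducing: always accept
    if accept_uphill and random.random() < math.exp(-delta / 0.5):
        return candidate  # sometimes accept *more* local conflict
    return x

greedy = explorer = best = 0.0
for _ in range(5000):
    greedy = step(greedy, accept_uphill=False)
    explorer = step(explorer, accept_uphill=True)
    if conflict(explorer) < conflict(best):
        best = explorer

print(f"greedy:   {conflict(greedy):.2f}")  # stuck in a nearby shallow minimum
print(f"explorer: {conflict(best):.2f}")    # typically far lower overall
```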
> But what i am finding, the more i read, is that practically every thought i think i've 'developed on my own' has already been thought before, often multiple times.
Tell me about it! I've come to see myself not as someone who is going to create new ideas de novo, but as someone who can digest some of the best ideas out there and repackage them in new and interesting ways.
Historically, even the greats did this. Einstein pulled heavily from Minkowski and Schopenhauer to develop relativity, and Darwin basically just found solid evidence for his grandfather's theory of evolution.
Better to stand on the shoulders of giants!
> reducing conflict globally might look a lot like dramatically increasing conflict locally
Yeah this is an interesting point. It's one of the hardest problems with goal-oriented algorithms--how much should you sacrifice short-term goals for long-term ones? Is it worth it to sacrifice a pawn to advance your queen? Sometimes there are clearly optimal long-term decisions, but I think there are times where the choice is ambiguous, especially when we're trying to optimize more than one variable.
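Here's a toy version of the chess example, using a discounted return (standard in reinforcement learning; the reward numbers are made up). The 'right' plan flips with the discount factor, and nothing in the problem tells you which factor is correct--that's the ambiguity:

```python
# Toy illustration of the short-term vs long-term tradeoff.
# Reward sequences are invented for the example.

def discounted_return(rewards, gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

keep_pawn      = [+1, +1, +1, +1]   # steady small gains
sacrifice_pawn = [-3,  0,  0, +9]   # lose material now, win the queen later

for gamma in (0.5, 0.9):
    keep = discounted_return(keep_pawn, gamma)
    sac = discounted_return(sacrifice_pawn, gamma)
    print(f"gamma={gamma}: keep={keep:.2f}, sacrifice={sac:.2f}")

# gamma=0.5: keep=1.88, sacrifice=-1.88  -> the impatient agent keeps the pawn
# gamma=0.9: keep=3.44, sacrifice=3.56   -> the patient agent sacrifices it
```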
As above, so below. As within, so without. All is mind.
Nice, I love watching people wrestle with this stuff. :)
First, I was confused by the footnotes. They seem important to me, and dashing back and forth is a hassle, I would just put them inline.
Second, this is pretty dense. I like it, but tastes vary.
Third, consider rewriting this in E'. I have had a lot of good luck processing and communicating about this stuff by just removing all instances of the verb "to be." Google E-Prime for more details. This is controversial, but I always find that the exercise of removing all the "is" words forces me to clarify what I actually mean, and be less hand-wavy overall. It's work, for sure, and I'm not sure you can avoid it 100% if you, like, literally define terms, but I'm curious if it helps you like it does me.
Finally, the content seems pretty sound to me. I stopped using the word "Gödel" (I've come to believe it must sound a lot like the words to some ancient spell that summons Daemons of Irrelevant Pedantry) and talk about embedded agency, instead. The key insight, as you point out, is that the territory contains the map, so you can never be sure of the map's accuracy. And we can analyze lots of ways to live with that limitation, and there is a lot of devil in those details, but they all seem, in the end, to *compress* the territory by removing redundancies. So I've stopped thinking about "maps" and instead I think of compressed representations of the territory. Some beliefs are utter BS, and you spend a bunch of time talking about that, and that's fair - it's an important and common situation.
But these days I'm more interested in how to make the map more accurate, in theory and in practice. Many compressed representations are lossy (Freudian psychoanalysis) but then again many lossy representations are still useful (gardening, classical mechanics.) Some compressed representations require so much heavy processing ("decompression") that we prefer the lossy results in all but a few narrow use cases (molecular biology, quantum mechanics.)
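The distinction is easy to see in code. A toy example--the "territory" here is just a redundant string:

```python
import zlib

# A very redundant "territory".
territory = ("the cat sat on the mat. " * 40).encode()

# Lossless: exactly recoverable, at the cost of a decompression step.
lossless = zlib.compress(territory, level=9)
assert zlib.decompress(lossless) == territory

# Lossy: smaller still and instantly usable, but the original is gone for good.
lossy = territory[:24]  # keep a single copy of the repeating pattern

print(len(territory), len(lossless), len(lossy))  # 960, ~40, 24
```

Lossless wins on fidelity but costs a decompression step; the lossy version is tiny and immediately usable, which is the classical-mechanics tradeoff above.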
Lossless compression - i.e., the true Laws of the Universe - is theoretically possible, but we'll never know for sure if we found it. This was Popper's gift to science, and I take great comfort that no AI, no Moloch, no alien Omega handing out boxes, can ever know for sure, either.
I have more to say, but time is short. Thanks for the writeup!
Thanks for this comment - i really enjoyed it! I am starting to find that my 'target audience' is something like a tiny sliver of the rationalist / less wrong community, which is inspiring, but also a little depressing: it feels like so few people are 'getting' what i'm trying to say. But, clearly, you do!
I love the part about "Daemons of Irrelevant Pedantry".
I also agree with that 'compression' representation, and yeah, that seems like the rub. In particular, lossier compressions are often cheaper to execute, and so they are often what you want if you plan to _act_ inside the world, instead of standing outside of it, freezing time, and then executing some logic, as many people like to imagine an AI would do.
> Lossless compression - i.e., the true Laws of the Universe - is theoretically possible, but we'll never know for sure if we found it
One of my beliefs lately is that the pattern of "trying to represent the map, the territory, myself, my goals, etc" is ultimately instrumental to getting myself to produce the right actions. The reason I create models and generate predictions and weight actions based upon EVs is, ultimately, to get myself to act in specific ways that promote specific outcomes.
But what if you don't _need_ to model any of that stuff, most of the time, to produce actions that generate outcomes which are _better_ for you than anything you could have predicted? Experimentally, i am finding that this is the case. The one thing I know I can control, moment to moment, is my attitude. And simply trying to maintain the right attitude seems unreasonably effective at making me act in ways that lead to outcomes far better than anything i could have imagined. I don't mean this in terms of worldly success or finance; i mean it in terms of harmonious life with my family, acceptance of what i don't know, less anxiety, more peace, more confidence, better health, etc.
This is what makes me think that the Territory has something like a 'central feature', and that this central feature is both an extremely compressed representation of the entire Territory and what some would call 'God'.
I hear you about the limited audience for this stuff. I don't quite know what to make of that.
As for the rest, I get stuck on what people mean by "good." Like, when you say "far better outcomes," can you explain what you mean? If you compare two possible outcomes, can you unambiguously state the rules for how you can tell which outcome is "better?"
> I get stuck on what people mean by "good."
So, this was, for most of written history, the most important question a person could ask. Then, maybe in the last 100 years, it became a mark of being unsophisticated to ask it.
I spent a good decade wandering in the philosophical wilderness trying to figure out what 'good' meant, because as best i could tell, from a scientific perspective, it _didn't_ mean anything. I really, really wanted to be a good person but i had no idea what that meant, if anything. It was distressing, to put it mildly. Eventually i stumbled on something that seemed to work for me, was physics-based, and dovetailed with what a lot of religions seemed to be saying. I can talk more about that if you want, but the short version is, "well, you probably have some intuition about what good is already; instead of trying to increase the resolution on it for the purpose of long range planning, try following that more rigorously on a moment by moment basis, and then see what happens as a result of that experiment."
By 'far better outcomes' I mean things like... situations that improve in ways that I hadn't even considered they could improve.
An example here is having recently returned home to Ohio, during covid. I moved here primarily because I wanted to be closer to my parents. My Dad recently passed away[1], and I'm really glad that I was here in Ohio and got to see him many more times than I otherwise would have. As a result of being here, some other things have come about: for instance, I've gotten back in contact with an old group of friends from high school and even grade school. It's been amazing to be part of this group of people, some of whom i've known for more than 75% of my life.
We used to live in a neighborhood in California where most people were renters, there were few children, and nobody had much of a yard to speak of. Now our neighborhood is like a nature park, there's a ton of kids, and we've made friends with lots of people we see on a regular basis.
So it's not that i compared two options and said, 'option A looks better, for the following factors.' We did what we thought was the right thing - being close to my Dad at the end of his life seemed obviously correct to both of us. We didn't bother to debate the pros and cons or work out all the implications - my wife and i made the decision basically because it seemed like the good thing to do, and then worked out the details later.
The end result is that this choice has led to a huge number of benefits that i'd not have been able to predict or imagine in advance.
[1] https://www.mrfh.com/obituary/barry-neyer
I lost my religion at a young age, around 8 IIRC, and even then I recognized the major things I got from religion: a social life, and a sense of right and wrong. The social life wasn't interesting to me, but I felt the morality-shaped hole inside me. After a little childish floundering, I came to a similar conclusion as you: just go along, at least for now, with my vague intuitive sense of right and wrong. Keep searching, and maybe someday I'll find a real answer. But I never ever thought that "intuition" was a real answer.
I found the kernel of my real answer, ironically, in a poem by C. S. Lewis called Evolutionary Hymn. The poem makes fun of the whole idea of evolution, and there's one line that says "Goodness = what comes next"
and it blew my mind, when I wondered, what if I bite that bullet? And over the years, handicapped (or maybe enabled?) by my lack of formal training in philosophy, I've built up from that kernel into an interesting ethical system that doesn't require intuition to tell right from wrong. The usual intuitions cash out as a special case: just like our physical intuitions, which evolved in the ancestral environment to make us pretty good at chucking spears, our moral intuitions evolved in the ancestral environment to make us pretty good at escaping Nash equilibria in iterated prisoners' dilemmas. This is evolutionarily advantageous because although the key *fact* of evolution is survival of the fittest, the key *tension* is between selfish individuals (who outcompete collaborative ones) vs collaborative groups (who outcompete selfish ones.)
So our moral intuitions work well to make us collaborate in Dunbar-sized tribes. Those intuitions fail us more and more often, as our world looks less and less like the ancestral environment. We need new physical intuitions as we harness quantum effects for our daily routines; likewise, we need new moral intuitions as we collaborate with people, distant from us in time and space, whom we will never meet.
(To be explicit, I basically propose that "true" morality is behavior that leads to continued existence. Naively, this implies selfishness. With a little more thought it becomes clear that this implies collaboration -- and ever more sophisticated collaboration as our civilization matures.)
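To be concrete, that tension falls out of the standard iterated prisoner's dilemma payoffs. A toy sketch (not a model of real evolution, obviously):

```python
# Standard payoffs: mutual cooperation 3/3, mutual defection 1/1,
# exploitation 5/0. 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(opp_history):
    return 'D'

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else 'C'  # cooperate first, then mirror

def play(a, b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)  # each sees the opponent's history
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Selfish individuals outcompete collaborative ones head-to-head...
print(play(always_defect, tit_for_tat))    # (204, 199): the defector edges out TFT
# ...but collaborative groups outcompete selfish groups:
print(play(tit_for_tat, tit_for_tat))      # (600, 600): everyone prospers
print(play(always_defect, always_defect))  # (200, 200): everyone scrapes by
```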