Imagine you have a part of your mind that just keeps track of which visceral experiences you have how often, and then expects more experiences in that ratio. So if you look at pictures of crocodiles, it feels like crocodiles are a bigger part of what is going on in the world. And then if you watch ten youtube videos of people slapping each other in the face, it feels like it is more normal for people to slap each other in the face. If you get up late in the day for a while, it tells you that the world is mostly dark. If you see starving people, it populates its simulated world with starving people (rather than just those magazine pictures of starving people it previously knew about).
‘Visceral’ is vague, but let’s say there are some kinds of experience it can understand, and some it can’t. Anecdotes and pictures and direct experience are intelligible, but it interprets more abstract datasets as ‘sometimes there are abstract datasets’. Like a reinforcement learner which can perceive a large subset of the stimuli that other parts of our minds can respond to, though not all of them.
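The mind-part described above is, in effect, a count-based frequency estimator: it tallies each kind of visceral experience and expects the world to match the observed ratios. Purely as an illustrative toy (the class and all names here are invented for this sketch, not a claim about real cognition), it might look like:

```python
from collections import Counter

class VisceralModel:
    """Toy sketch of the frequency-tracking mind-part.

    It counts each kind of 'visceral' experience it sees, and its
    expectation about the world is just those observed ratios.
    """

    def __init__(self):
        self.counts = Counter()

    def observe(self, experience):
        # Every intelligible experience bumps its tally by one,
        # regardless of how representative it actually is.
        self.counts[experience] += 1

    def expected_share(self, experience):
        # Expected prevalence = fraction of past observations.
        total = sum(self.counts.values())
        return self.counts[experience] / total if total else 0.0

model = VisceralModel()
for _ in range(10):
    model.observe("face-slapping video")
model.observe("ordinary walk outside")

# After ten slap videos and one walk, the model 'feels' that
# face-slapping makes up about 10/11 of what goes on in the world.
print(model.expected_share("face-slapping video"))  # ≈ 0.909
```

The point of the sketch is the failure mode: the estimator weights a YouTube binge the same as direct experience, so the only lever the "intellectual" parts have is to feed it a different diet of observations.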
And suppose that you can even intellectually notice that you are responding badly to seeing a few crocodile pictures, but the kinds of mental parts that can ‘intellectually notice’ things don’t speak any languages that the other part knows, so they can’t just directly fix the problem with explicit efforts. The best they can do is choose to look at a bunch of the most compelling non-crocodile stuff they can find until the other mental part gets the picture. And the whole time you would feel like you have an accurate account of the world.
My impression is that this is what humans are like to some extent, but I don’t know the extent or exact nature of the interaction between this and other ways that humans are. I also don’t know whether this is all a thing that experts have an excellent understanding of, because this is not currently the kind of blog where the blogger does a bunch of research before they write things.
Anyway, if this picture captured an important part of what was going on in the human mind, I might expect a key issue for humans would be strategizing around what kinds of experiences to consume for worldview warping purposes. For instance, this might come up when you are deciding whether to watch ten videos of people slapping each other on YouTube.
People do strategize about this kind of thing a bit, though I think mostly about other people’s behavior, about really extreme cases, or about seeking happiness rather than truth. Here are examples I can think of:
- Whether people should watch porn is often considered to rest heavily on how it might change viewers’ perceptions of normal sexuality and relationships.
- Some people argue that others should not play violent video games, on grounds that it might normalize or encourage violence.
- There are a variety of arguments about whether the media or advertising should be different, to change various norms.
- People sometimes avoid experiences that will be substantially upsetting or unpleasant at the time, sometimes in part because it will change their perceptions. For instance, they will feel a bit like they are in a post-apocalyptic world, or they will get the sense that people usually end up destitute in gutters.
- I have heard of people not looking at Facebook because it gives them the impression that every past classmate of theirs just got engaged to a billionaire they met while they were shooting a (critically acclaimed) movie about how exciting their life is, making the prospective Facebook viewer’s mixed success at life less bearable.
- People often seek to ‘cheer themselves up’, which arguably means intentionally making their own perception of the world rosier. For instance, they might be cheered up by reading about how a boy saved his father’s life, or looking at fifty pictures of maximally fluffy and small animals. The former is often described as ‘will restore your faith in humanity’, which suggests it is intended to change your understanding of the world, and I hypothesize much more tentatively that the latter is also intended to actually change your perception of how much of the world consists of baby rabbits.
- People sometimes change things in their environment to change their perception of things in their environment, such as themselves. For instance, they put on their pearls and remove the tower of empty soup cans from their floor, to feel like the kind of sophisticated adult who owns expensive jewelry and doesn’t live in a garbage dump (or a modern art museum). This seems related, though changing your perceptions of your environment by changing your environment does actually cause your environment to be different, so it seems like a marginal case.
- I think I sometimes try to interact with some intellectual sphere a bit in order to feel like it is a happening place, to encourage myself to interact with it more. Though this is not very explicit. For instance, if I want to think about the kinds of things I might blog about, I might look at some other related blogs and remind myself that that corner of intellectual cyberspace is real and exciting.
These cases all either involve very extreme and immediate corrections, a desire to meddle with someone else’s behavior, or efforts to feel better about the world rather than to view it more accurately. The kinds of things I have in mind would be more like:
- Thinking twice about being entertained by fiction that depicts the world inaccurately, especially in subtle or harmful ways. For instance, TV shows in realistic settings where most people are untrustworthy, cooperation is doomed to fail, and reasonable people invest a lot in watching their backs. Or where things always work out too tidily. Or where ‘romance’ and ‘friendship’ are different nebulous bundles of behaviors and commitments and such from the ones you would want to think of them as.
- Thinking twice about being entertained by fiction that merely depicts the world as conforming to story norms.
- Even in non-fiction, avoiding habitual interaction with framings, emphases and narratives that you don’t want to increase your own belief in the importance of. For instance, if you don’t want to wonder if the world is full of leopard seals, or think that the world is full of people who are interested in the question of whether the world is full of leopard seals, don’t watch the crazy ‘all of our problems are caused by leopard seals’ channel, even ironically or in the intellectual knowledge that you had to search the whole internet to find such an oddity.
- You might preferentially associate with people who discuss the world around them in terms of stories that you prefer to believe in. For instance, being around people who often draw attention to the world’s mystery might push your model of the world toward being intrinsically incomprehensible, while conversation partners who habitually talk about everything as if it has intelligible parts might teach you very different expectations.
- If you haven’t seen any failure in some arena because it seems so unimaginable that you don’t take risks, you might try failing a bit intentionally or closely observing someone else’s failures, to acquire a sense that failing is a thing you can actually do, with some specific non-world-ending consequences.
- If you experience a weird corner of the world relative to other humans, you might just try to spend some time on a more representative sample of activities or places or company. You might look up how other people spend time, note that about 1% of people in America are truck drivers, and try to ride in a representative truck one time.
- If you think a lot of the world is unpopulated and you spend your time in populated areas almost exclusively (probably true, due to the observer selection effect), you might go to the empty bits if you want a realistic impression of what the physical world is like.
I think I occasionally hear these kinds of considerations raised, and maybe acted on, though it is hard to think of examples, other than people sometimes intentionally spending more time with people who nebulously seem like good influences, which might embody some such things.
In the wake of the recent US election, I have heard people talking about mingling more with people from different bubbles. Which also sounds maybe close, but I think they are mostly suggesting talking to political rivals about their explicit views and trying to understand where they are coming from and to empathize with them. I’m not talking about anything so intellectual or socially virtuous—I’m just talking about bumping into people who vote differently often enough that your intuitions register their existence. Which is arguably less of a big deal for characteristics that define political divides, because you are probably aware of your political rivals’ existence by the time you are rivaling them. And if not, the media will probably tell you about them. Whereas if you never see any truck drivers, you could easily forget to consistently imagine that there are three million of them around here somewhere. Even if you read a statistic about it once, and could maybe figure out a decent guess if someone directly asked you ‘how many truck drivers are there in America?’ And the existence of three million truck drivers probably comes up sometimes, like when speculating about the implications of self-driving cars.
So anyway, I claim that this kind of strategizing about experience consumption mostly comes up in fairly extreme or immediate cases, or cases where the costs are to someone else, or in order to improve the enjoyability of one’s worldview rather than its accuracy. I’m not very confident about this. But supposing this is true, it might be because the effects are too small to make it worth thinking about (and other people know that, while I don’t). It could also be that other people are thinking about these things a lot more than I think they are, and they don’t discuss them much, or I forget the good examples.
An interesting explanation is that such strategizing would indeed be very useful, but we mostly don’t do it because it is only strategic from the perspective of the more intellectual parts of our minds. Those parts would like to correct our crazy instinctive picture of the world in pursuit of their own abstract goals. However the part we have been talking about—the ‘visceral picture of the world based on direct observations and stories’ part— doesn’t have any picture of a gap between an accurate abstract world model and its own model. That’s too abstract for a start, and representing “model X is badly inaccurate” inside model X is at least a bit complicated. And our more intellectual mental parts have trouble finding any experience that would really hammer home the fact of this gap. So we don’t really feel like it is a big deal, though it seems like it might be intellectually. This also matches the way people discuss this kind of thing intellectually, but don’t seem to do much about it.