Mental minimalism


It is nice to work in other people’s spaces, compared to my own, because I am not supposed to interact with their stuff. Having fewer affordances reduces mental noise, in the same way that minimalism does. You can have something like the mental effects of minimalism even in a pigsty, just by firmly refusing to believe that pigs can be interacted with. Objects become scenery.

This suggests that instead of picking up my things, I could just make a solemn promise not to pick them up. I have yet to run the experiments.

If this hypothesis is true, it might have implications beyond optimal tidying schedules. For many things in the world, I wish I felt more affordances. I wish they didn’t feel like wallpaper. Also, the world seems extremely noisy and cluttered. Maybe these things are related. Maybe if you can only cope with so much noise and clutter, then you have to demote affordances, to make the world livable. In that case, maybe there are other ways to increase clutter-tolerance, that would allow more affordances.

Wanting the destination in a journey-based community

In a relationship centered around helping one another improve, there is a risk that after both have improved, one person becomes unable to be helped by the other.

Similarly, in a community centered around helping one another improve, there is a risk of succeeding enough that self-improvement is no longer a dominant concern in one’s life. In fact, hopefully everyone will at some point be sufficiently improved that it is time to get on to the actual project they were improving for. At least on a model where the self-improvement was partly instrumental. Maybe object level work and improvement will be mixed together for a long while. But by the last week of your life, it is unlikely that you should be building much infrastructure for yourself. Yet if self-improvement were no longer a dominant concern, then a self-improvement based community could not help you, and you would be of little interest to the community.

Furthermore, in a community centered around particular styles of growth—such as having deep insights about one’s soul—there is an even more plausible risk: that style may cease to be the most effective route to becoming strong, and again you would lose your community if your path of improvement took you through such a place.

Relatedly, in a community where much connection derives from people offering each other a particular kind of help, there is a risk that you will learn to help yourself in that way, severing the flow of other social benefits, unless you are somehow hampered. One kind of help this is particularly likely for is ‘easy help’. If you learn to solve the easy problems, fewer people can help you.

This should all be a disincentive to improving. Or to interpreting your current state as good enough to be getting on with saving the world, for instance. In the same way that one might be tempted to interpret oneself as weak enough to be helped by one’s partner, one might be tempted to interpret oneself as hampered by psychological maladaptations deep and fascinating enough to bond over with other self-improvement fanatics.

Is this a problem for self-improvement based communities you are familiar with? Are the most popular or interesting people in these communities the ones who get on with object level work in an efficient and psychologically uneventful fashion? Or the people who have breakthroughs and trajectory changes, and try new things, and find new ways of seeing themselves and others? Do you really look forward to the day when your Hamming problem is ‘I need to find a more efficient toothbrush’?

Non-existent markets in everything: anecdotes

Careful scientific study is widely considered a better basis for one’s views than anecdote. But anecdotes are more fun, so everyone reads about them anyway. Another virtue of anecdote is its low price. To verify scientifically that vitamin E is good for you might take millions of dollars, or might be literally impossible. To verify anecdotally that your friend credits vitamin E with some kind of impressive life-altering success requires nothing but a moderate number of epistemically sloppy friends who like to try things, plus some time talking to them. Let’s say a few hundred dollars.

But some free anecdote about your friend’s nutritional revelations is not going to get you far. The really good anecdotes are harder to come by. Here’s a good anecdote:

Bird and Layzell used genetic algorithms to evolve a design for an oscillator, and found that one of the solutions involved repurposing the printed circuit board tracks on the system’s motherboard as a radio, to pick up oscillating signals generated by nearby personal computers.

It vividly makes exactly the point the authors want: AI agents might technically satisfy the tasks they are given, while not doing what their creators expected. It gives the reader a concrete instance of something that might have otherwise seemed unrealistically tied up with control of mythical genies. I have seen this single anecdote used valuably multiple times.

This anecdote was more expensive than an anecdote about your friend’s success with vitamin E, but it could probably have been produced much more cheaply than a scientific study. It came from actual scientific research, but note that such research doesn’t have to be nearly as rigorous or complete when its purpose is only to furnish an anecdote.

Bird and Layzell were not trying to make an anecdote at all (as far as I know). They were trying to make genetic algorithms that could produce oscillator designs. It was by pure luck that they produced this useful byproduct.

In a different world, the people doing this study would be the ones who knew there was demand for a nice vivid illustration of this particular idea. They would have thought about what would best paint that picture, and thought of this experiment among a few other things. They would carry them out, looking for one which nicely instantiated the desired idea. The next day they might go out and look for a particularly tragic case where someone waited too long because they were afraid, or a bridge that collapsed because an engineer made a particular error.

Could we have a market for anecdotes? I claim anecdotes could be created intentionally, separately from the rest of writing, because they are often modular – so might be outsourced – and are used in multiple pieces of writing – so are especially valuable. They are more valuable again because they are bits that people frequently remember from writing. And arguably changing the image a person associates with an idea can have an especially powerful effect on their view of it. Anecdotes are also especially good for outsourcing if thinking good abstract thoughts is not that well correlated with knowing the best stories to illustrate them.

Do we already have such a market? Not that I know of. If we do, please tell me; I want to buy some anecdotes. If not, why not?

Mistakes #4: breaking Chesterton’s fence in the presence of bull

(Mistakes #1, #2, #3)

Lots of prominent activities don’t immediately make pragmatic sense. I mean, they make sense in the sense that you want to do them, but not in the sense that you can give an explicit account of why you want to do them.

For instance, visiting family on holidays. For one thing, why do we even have holidays? For a few days, and not on other days? And why eat turkey or drink champagne then, and not the rest of the time? For another thing, why do we have families? And why see your family exactly that week? For those who like their families, is that the most convenient week? Or do people really need to coordinate in holding family dinner at the same time as people not in their families? For those who don’t like their families, why go to so much trouble to see people whose only special feature is that you were already forced to spend too much time with them decades ago? Also, why guess what someone else wants you to buy for them, while they spend the same amount on a guess for you? And why does everything have to be decorated in colors and sparklinesses that nobody during more sober times finds aesthetically pleasing?

As a young person, learning about the world, you can respond to this sort of thing in at least two ways. One option is to go along with the things. Perhaps you have great trust in society. Perhaps you don’t notice. Perhaps you have little curiosity or passion for improving the world. Perhaps you have heard of Chesterton’s fence.

Another option is to politely disregard any of the things that don’t make apparent sense, and redesign your life according to reason. Or rudely disregard any of the things that don’t make sense, if you don’t see the use in politeness! If you see no reason for eating dinner before dessert, or exchanging gifts, or doing well in school, then you just don’t do those things.

I have a soft spot for the social innovator, who sees the people needlessly toiling for senseless or forgotten goals and is willing to face social censure to make a better world. However I think young, intelligent people often make mistakes this way. It’s not clear to me that these mistakes can be avoided without giving up a valuable attitude, but my guess is that some can.

There are at least three closely related mistakes: breaking down Chesterton’s fence because you don’t know why it’s there; breaking down Chesterton’s fence because the owner doesn’t know why it is there; and breaking down Chesterton’s fence because the owner thinks it is there for reasons you know to be nonsense.

Chesterton’s fence

Chesterton’s fence comes from a famous principle, which basically says ‘don’t take down fences unless you know why they were put up’. Or relatedly, don’t try to reform society while you don’t understand the reasons for its present behaviors. Most fences are put up not by crazy people, but by people who had some sensible motive. Even if you can’t see a bull, the fact that there is a fence there is suggestive.

Suppose you don’t understand why people go out for coffee instead of caramels. And suppose that you have an important date to organize which is likely to succeed if you don’t mess it up. One thing you might say is, ‘well, people go out for coffee a LOT, but I don’t see why coffee is especially good for dates, and caramels are cheaper and more delicious’. Then you might opt for the caramel date. This would be knocking down Chesterton’s fence.

When younger, I didn’t intellectually understand why people would shave their legs, or maintain their garden, or try at sports at school, or listen to their feelings if they weren’t obviously well-aligned, or get a job, or keep in much touch with their family once they had left home, or live in a building, or drink alcohol. So I either didn’t do these things, or assumed I would not later on, and so did not prepare for them. Even though I knew they were popular activities among humans. They aren’t obviously good activities, even now, but the point is that I didn’t really consider whether people have them for a reason that I didn’t understand. I quickly disregarded them because I didn’t understand them. And in fact I think I was often wrong to think they were worthless.

The basic mistake with Chesterton’s fence is either not taking the existence of a thing as evidence that someone has a reason for it, or inferring too little from that about whether it is good on your values.

Chesterton’s fence after contacting the caretaker

A more subtle mistake involves inquiring after people’s reasons for the inexplicable fences they protect, finding that they can report none, then hastily trashing the fences. For instance, you ask your mother why it is important for you to have table manners, and she says ‘it just is’, so you abandon them.

The problem with this is that people do lots of relatively useful things without having much explicit understanding of what they are doing or why. People pick most behaviors up from other people.

Sometimes someone once understood why the behavior was good, and intentionally constructed it for others to use. Like CBT or the Alexander Technique. Sometimes the thing wasn’t spread because someone understood it, but still it was experimentally checked, at least informally. Like drinking lemon and honey when you are sick, or making eye contact the right amount. Other times perhaps nobody ever understood or had the opportunity to check well for efficacy, but still social forces preferentially select for things that work, at least to some extent, for some goals. Arguably like religion, or being idealistic as a young person, or following your curiosity.

Even if never in history had anyone ever explicitly understood why table manners were important, their prevalence is decent evidence that you shouldn’t immediately abandon them.

Chesterton’s fence with a stupid sign

After recognizing this potential mistake, there is yet another mistake you can make: you ask people the reason for a thing, and they give you a reason, and it’s a bad one—and then you’re allowed to break down the fence, right? If you know it was put there by some crazy person to keep the flying pigs at bay?

For instance, suppose you ask your math teacher why math is important, and she says that as an adult you will need to be able to add and subtract numbers when you are shopping or if you have a job in administration or science. And you know that people can have calculators, so adding skills are not really important. And you don’t care much about grocery price comparisons anyway, since your value of time is too high. And basically none of what you are doing is adding or subtracting anyway! It’s all trigonometry and calculus. Then you might reasonably infer that math is of no use to you. If this person who appears to be very invested in math—whose whole life is about encouraging you to do math—can’t come up with anything better than that, then surely you are safe to ignore math. (Loosely based on a true story).

The problem here is that often instead of doing reasonable things while being clueless about why (as discussed in the last section), people come up with reasons for the things they are doing, which are unrelated to the process that caused them to be doing the thing.

Perhaps the builder of the fence did it because something scares his dogs when they go up the hill. Whatever it is, he doesn’t want it coming down to his house. He has a correct intuition that a fence there would make him feel safer. He comes up with a story that the problem is flying pigs, which are fortunately respectful of fences. This is pretty much irrelevant. In this case the fence does indicate something you should watch out for.

The math teacher is teaching math because other forces recognized the value of math, recognized that she was good at both high school math and at teaching, and paid her enough to teach it. She doesn’t know about the details of how this came about, or why it was considered a good idea. However sometimes students ask her why they should do math. She ponders this, and reasonably thinks about when math comes up in her own life. She comes up with calculating things. She might also think of ‘being a math teacher’—an equally thrilling inducement to the ambitious student. Though the math teacher is in charge of the fence, she doesn’t automatically know why it is there. It feels like she should, which may compel her to search for reasons, but the reasons have had somewhere between seconds and hours of thought applied to them, while the behavior itself underwent a more thorough optimization process.


This makes things hard, because when are you allowed to just write off things as not valuable? If someone looks you straight in the eye and says ‘I’m clashing these pots together because aliens’, do you have to reserve substantial credence that they are doing it for a good reason?

I think the main point is that in these cases you need to have actual beliefs about what is going on, and decide based on the apparent situation. You don’t get to disregard things wholesale because you don’t understand them, because their proponents don’t understand them, or because their proponents claim to do them for bad reasons. But you can update somewhat toward thinking the things are useless, and perhaps a lot.

An important part of figuring out when you should disregard inexplicable behaviors is understanding where the behavior came from. If your pot-clashing friend feels compelled to clash pots because it makes him feel less anxious about the aliens, then probably it is worth it for him, for entirely non-alien reasons. There is even some chance that it would also be useful in relieving your own anxieties about other things. If your friend clashes pots because he read online that this is the anti-alien-club sanctioned way to fend off aliens, then there are fewer models on which clashing pots is advantageous for either of you.

Sometimes understanding where behaviors come from will tell you that a behavior is well-honed, but not for goals you have, so this understanding allows you to break down fences you might have otherwise respected. For instance, if you are a teetotaler, and your best guess is that the excitement about festival X is closely related to its cheap and delicious booze, then you can infer that the festival is an unusually poor fit for you, even if nobody can explain well to you why they love it.


In general, I propose acting like a person who takes seriously the possibility that there is a bull in the paddock, rather than someone who is obliged to do a checking ritual before they are allowed to gleefully smash down the fence. Then follow the usual epistemological procedures for determining what statements are true and what actions are good.

Misunderstandings of not understanding

There are probably lots of attitudes to a thing we might call ‘understanding’ it. Here are examples of two:

“I understand combustion”

“I understand her feelings”

In the first statement, the person has a good explicit model of combustion, the range of ways it can happen, what causes it, and that sort of thing. In the second, her feelings feel natural and comprehensible to the speaker. They might have an intuitive sense for how her feelings will evolve, but they probably don’t have an explicit model of the psychological, biological, or game-theoretic reasons behind her feelings.

These different instances of ‘understanding’ probably vary on a few different dimensions that could be separated, but this is going to be a short blog post.

One thing that separates these kinds of understanding is their objects. People rarely say “I understand combustion” to mean that it feels intuitively natural to them, in the absence of any explicit model. And they rarely say “I understand her feelings” to mean that they can answer questions about her alien neural chemistry.

There are some things where both kinds of understanding sometimes apply though. For instance “I don’t understand art” can mean “when I look at art, it doesn’t feel especially natural or good to me” or it can mean “I do not know what caused humans to like art so much”. The second speaker may love art.

I think failure to distinguish these kinds of understanding causes misunderstanding. For instance, suppose you say you don’t understand why people go out to bars. This could mean you don’t enjoy doing so personally, or that you don’t see why bars in particular evolved to be a big thing while public swimming pools full of tiny balls did not. If the two aren’t really distinguished in people’s minds, then they will suppose you are saying both, even once your elaboration strongly implies the latter.

Perhaps this is a reasonable shorthand, because in practice the only people who wonder why humans gather at bars in particular are those who aren’t enjoying bars enough. But I think it confuses things (for instance, impressions of how fun I am).

A particularly costly kind of misunderstanding is when people themselves assume they are saying both, because they think there is just one thing. Then for instance they notice that they don’t know why art exists, and don’t think to independently check whether they like art. Or they don’t understand why sport is in the curriculum, so they assume that they don’t enjoy it. Or ‘tradition’ seems an incomprehensible justification for a behavior, so they don’t pay attention to whether they personally get anything out of traditions.

I don’t really know how much any of this is a thing—I feel like I have seen quite a few examples of it, but I don’t remember many. What do you think?

(Related: Meta-error: I like therefore I am)