Too late for the easy solutions

The other day I took a day off, to do whatever I felt like. One appealing idea early in the day was to solve a major problem that has been bothering me for as long as I remember. I won’t go into details, but it seemed like maybe if I just sat down and tried to solve it, I might.

Then I realized that I didn’t really feel like solving it. If I could just sit down and solve it, then why hadn’t I already solved it? What would I say happened, even to myself? Either I had so far failed to do an easy thing, at great cost, or a legitimately hard problem had been magically and unintelligibly solved. Either I would be a bad person, or reality would not make sense. Neither of these seemed appealing at all. Better to suffer a little longer, and solve the problem some hard way, without blame or confusion.

When I noticed this, I realized it was actually a familiar pattern. It would be disconcerting to just casually decide to not have the problem any more, without some more meaningful banishment ritual.

This relates to forgetting to solve problems because you assume that you have tried. Instead of assuming that you have tried because you are reasonable, you implicitly assume you have tried because you are too scared of learning that you didn’t try or that your trying was somehow wrong.

This is related to the general mistake where a task feels like it is meant to take a long time, so you dutifully spend a long time on it. For instance, you think you might be curious about this philosophical problem, and so you talk to your advisor about your curiosity, and arrange to get some funding to pursue it for a little while, and make a schedule, and think about how to go about thinking about it, and try to go to more conferences on the topic. Whereas if you had just immediately sat down and tried to answer the question, you might have just answered it. This is something that ‘rationality techniques’, such as setting a five minute timer for solving a problem, or trying to solve a problem assuming it is really easy, are meant to eradicate.

It also reminds me a bit of people who sacrifice a lot for some goal, and then are faced with the prospect of everyone achieving the same ends without sacrifice, and are reluctant to accept that fate.

I’m not sure what to do about this, but for now I’m trying out just not making this error, without any fancy debugging.

One misuse of the efficient Katja hypothesis

Sometimes I have bad, longstanding problems. Sometimes other people know about them, but probably assume I have done everything I can to solve them, because I’m a reasonable person. So they don’t try to help me to solve them, because the problems are presumably incredibly intractable. They especially don’t try to help me solve them by checking I’ve tried easy things, because that would be rude. Reasonable people don’t go around having terrible problems that are easy to solve. 

Unfortunately, sometimes I do the same thing. I know I have some substantial problem, and since it has been there forever, I assume I’ve tried pretty hard to solve it, because I’m a reasonable person. I don’t explicitly think this, or the argument would look suspicious to me. But if there’s just some terrible thing that is a permanent fixture of my life, presumably that’s because it’s pretty damn hard to fix. 

Sometimes though, past-Katja was assuming something similar. Sometimes a problem feels like it has been around forever from the first time a person even notices it. You won’t necessarily notice yourself truly noticing it for the first time, and then rapidly take concrete actions to resolve it. It is already familiar. Or perhaps you have had it since before you were reasonable at all, but at every point you implicitly assumed it was beyond your capabilities, since apparently it was beyond them yesterday.

At any rate, sometimes I end up with longstanding problems that I have never actually tried that hard to solve. And unless I actually stare at them and ask myself what it was I did to try to solve them, I don’t notice. This is similar to the error I labeled ‘I like therefore I am’, in which approving of a virtue is mistaken for having it. Here disapproving of a problem is mistaken for fighting it. I am not sure if this is a pattern for other people. If so, I suggest actually trying to solve it.

Mental minimalism

It is nice to work in other people’s spaces relative to mine, because I am not supposed to interact with their stuff. Having fewer affordances reduces mental noise, in the same way that minimalism does. You can have something like the mental effects of minimalism even in a pigsty, just by firmly refusing to believe that pigs can be interacted with. Objects become scenery.

This suggests that instead of picking up my things, I could just make a solemn promise that I will not pick up my things. I am yet to run the experiments.

If this hypothesis is true, it might have implications beyond optimal tidying schedules. For many things in the world, I wish I felt more affordances. I wish they didn’t feel like wallpaper. Also, the world seems extremely noisy and cluttered. Maybe these things are related. Maybe if you can only cope with so much noise and clutter, then you have to demote affordances, to make the world livable. In that case, maybe there are other ways to increase clutter-tolerance, that would allow more affordances.

Wanting the destination in a journey-based community

In a relationship centered around helping one another improve, there is a risk that after both have improved, one person becomes unable to be helped by the other.

Similarly, in a community centered around helping one another improve, there is a risk of succeeding enough that self-improvement is no longer a dominant concern in one’s life. In fact, hopefully everyone will at some point be sufficiently improved that it is time to get on to the actual project they were improving for. At least on a model where the self-improvement was partly instrumental. Maybe object level work and improvement will be mixed together for a long while. But by the last week of your life, it is unlikely that you should be building much infrastructure for yourself. Yet if self-improvement were no longer a dominant concern, then a self-improvement based community could not help you, and you would be of little interest to the community.

Furthermore, in a community centered around particular styles of growth—such as having deep insights about one’s soul—there is an even more plausible risk that that style will cease to be the most effective route to becoming strong, and again you will lose your community if your path of improvement takes you elsewhere.

Relatedly, in a community where much connection derives from people offering each other a particular kind of help, there is a risk that you will learn to help yourself in that way, severing the flow of other social benefits, unless you are somehow hampered. One kind of help this is particularly likely for is ‘easy help’. If you learn to solve the easy problems, fewer people can help you.

This should all be a disincentive to improving. Or to interpreting your current state as being good enough to be getting on with saving the world, for instance. In the same way as one might be tempted to interpret oneself as weak, so as to be helped by one’s partner, one might be tempted to interpret oneself as hampered by deep and fascinating psychological maladaptations, so as to be able to bond over them with other self-improvement fanatics.

Is this a problem for self-improvement based communities you are familiar with? Are the most popular or interesting people in these communities the ones who get on with object level work in an efficient and psychologically uneventful fashion? Or the people who have breakthroughs and trajectory changes, and try new things, and find new ways of seeing themselves and others? Do you really look forward to the day when your Hamming problem is ‘I need to find a more efficient toothbrush’?

Non-existent markets in everything: anecdotes

Careful scientific study is widely considered a better basis for one’s views than anecdote. But anecdotes are more fun, so everyone reads about them anyway. Another virtue of anecdote is its low price. To verify scientifically that vitamin E is good for you might take millions of dollars or literally be impossible. To verify anecdotally that your friend credits vitamin E with some kind of impressive life-altering success requires nothing but a moderate number of epistemically sloppy friends who like to try things. Also some time talking to them. Let’s say a few hundred dollars.

But some free anecdote about your friend’s nutritional revelations is not going to get you far. The really good anecdotes are harder to come by. Here’s a good anecdote:

Bird and Layzell used genetic algorithms to evolve a design for an oscillator, and found that one of the solutions involved repurposing the printed circuit board tracks on the system’s motherboard as a radio, to pick up oscillating signals generated by nearby personal computers.

It vividly makes exactly the point the authors want: AI agents might technically satisfy the tasks they are given, while not doing what their creators expected. It gives the reader a concrete instance of something that might have otherwise seemed unrealistically tied up with control of mythical genies. I have seen this single anecdote used valuably multiple times.

This anecdote was more expensive to produce than an anecdote about your friend’s success with vitamin E. However, it could probably have been much cheaper than a scientific study. It was actually scientific research, but note that the research doesn’t have to be nearly as rigorous or complete for the purpose of making an anecdote about it.

Bird and Layzell were not trying to make an anecdote at all (as far as I know). They were trying to make genetic algorithms that could produce oscillator designs. It was by pure luck that they produced this useful byproduct.

In a different world, the people doing this study would be the ones who knew there was demand for a nice vivid illustration of this particular idea. They would have thought about what would best paint that picture, and thought of this experiment among a few other things. They would carry them out, looking for one which nicely instantiated the desired idea. The next day they might go out and look for a particularly tragic case where someone waited too long because they were afraid, or a bridge that collapsed because an engineer made a particular error.

Could we have a market for anecdotes? I claim anecdotes could be created intentionally, separately from the rest of writing, because they are often modular – so might be outsourced – and are used in multiple pieces of writing – so are especially valuable. They are more valuable still because they are the bits that people frequently remember from writing. And arguably changing the image a person associates with an idea can have an especially powerful effect on their view of it. Anecdotes are also especially good for outsourcing if thinking good abstract thoughts is not that well correlated with knowing the best stories to illustrate them.

Do we already have such a market? Not that I know of. If we do, please tell me; I want to buy some anecdotes. If not, why not?