Eliezer suggests that increased economic growth is likely bad for the world, as it should speed up AI progress relative to work on AI safety. He reasons that safety work probably depends more on insights building upon one another than AI work in general does. Work on safety should thus parallelize less well than work on AI, and so be at a disadvantage in a faster-paced economy. Also, unfriendly AI should benefit more from brute computational power than friendly AI. He explains,
“Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.” …
“Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.”
I’m sympathetic to others’ criticisms of this argument, but would like to point out a more basic problem, granting all other assumptions. As far as I can tell, the effect of economic growth on parallelization should go the other way: economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.
Economic growth, without substantial population growth, means that each person is doing more work in their life. This means the work that would otherwise have been done by a number of people can be done by a single person, in sequence. The number of AI researchers at a given time shouldn’t obviously change much if the economy overall is more productive. But each AI researcher will have effectively lived and worked for longer, before they are replaced by a different person starting off again ignorant. If you think research is better done by a small number of people working for a long time than by a lot of people doing a little bit each, economic growth seems like a good thing.
On this view, economic growth is not like speeding up time – it is like speeding up how fast you can do things, which is like slowing down time. Robotic cars and more efficient coffee lids alike mean researchers (and everyone else) have more hours per day to do things other than navigate traffic and lid their coffee. I expect economic growth seems like speeding up time if you imagine it speeding up others’ abilities to do things and forget it also speeds up yours. Or alternatively if you think it speeds up some things everyone does, without speeding up some important things, such as people’s abilities to think and prepare. But that seems not obviously true, and would anyway be another argument.
I find this hostile wife phenomenon pretty confusing for several reasons:
- Wanting other people dead is generally considered an evil character trait. Most people either don’t have such wishes or don’t admit to them. This is especially the case when the person you prefer dead is someone you are meant to be loyal to. Often this applies even if they are permanently unconscious. The ‘analogy’ between wanting someone dead and insisting they don’t get cryonics is too clear to be missed by anyone.
- People don’t usually seem to care much about abstract beliefs or what anybody is going to do in the distant future, except insofar as these things imply character traits or group ties. If the fact that your partner is likely to die at all in the distant future isn’t enough to scare you away, I can’t see how anything he might do after that can be disturbing.
- People tend to care a lot about romantic partners, compared to other things. They are often willing to change religion or overlook homicidal tendencies to be with the one they love. Romance is infamous for making people not care about whether the sky falls, or they catch deadly venereal diseases.
- The hostile wife phenomenon seems to be a mostly female thing, but doesn’t go with any especially strong female-specific characteristics I know of. Women overall don’t especially resist medical spending for instance, and are often criticized as a group for enjoying otherwise pointless expenditures too much.
- My surprise is probably somewhat exacerbated by pre-existing surprise over many people wanting to die ever, but that is another issue.
Partial explanations of the hostile wife phenomenon offered by Darwin, de Wolf and de Wolf, Quentin, C (#44), Robin Hanson, and others:
- Women are more often hostile just because most interested parties are heterosexual men (This is presumably some part of the explanation, but not much – of around ninety cases of significant others interfering in cryonics arrangements recorded between 1978 and 1986, I count four males, while roughly one quarter of Alcor’s members are women. It wouldn’t explain the strong opposition anyway, nor the fact that men are more interested in cryonics to begin with).
- Women really don’t like looking strange (according to Darwin and the de Wolfs, women often claim deep embarrassment. This just raises the question of why it’s so embarrassing. Plenty of people have plenty of strange opinions about all sorts of far off things, and they can usually devote some resources to them before it becomes so problematic for their partner).
- Cryonics looks like a one-way ticket to somewhere else, where other women are, which also makes here and now less significant, and shows the man could go on with life without the woman (this is at least a cost in terms of something that usually matters in relationships. But why not go with him then? Why not divorce him over his exercise habits? Why wouldn’t men have similar concerns?)
- Cost, perhaps specifically that it is selfish not to give money to the more needy, or to the wife or family (But people put up with huge expense on other funeral rituals and last-minute attempts to live longer. Perhaps cryonics just looks less likely to do what it is meant to do? Would it be more admissible if it weren’t meant to do anything? Why would women care about this more than men?)
- Separation in whatever other afterlife the spouse has planned (this could only explain whatever proportion of religious people don’t believe you go to the same place eventually after living longer, and should apply to men also)
- Cryonics is seen as a substitute to caring about raising family, since you don’t need genetic immortality if you have proper immortality (if genetic immortality is a common conscious reason to invest in a family I’m not aware of it, and this shouldn’t especially apply to women)
- Wives object to their husband joining a boys’ club, and feel left out (this only makes sense for those heavily socially involved in a cryonics organization, and I understand this phenomenon is much broader)
- Thinking styles: women don’t like risky things, ‘global solutions’, or the sort of innovative thinking required to appreciate cryonics (this is Darwin and the de Wolfs’ main answer. It is made of controversial assumptions and wouldn’t explain strong antagonism anyway, just lack of enthusiasm. Even if you aren’t a fan of risk, it’s generally considered better than complete failure).
- Women either want to die, or have tenuously justified doing it, and resent being presented alternatives. This also explains why the answer to many of the above things is not to just sign up with your husband (But I see no evidence women want to die especially much, and while apparently many people have come to terms with their mortality enough to not fight it, I don’t think this is much more common among females than males)
None of these is satisfying. Got any more?
On the off chance the somewhat promising social disapproval hypothesis is correct, I warn any prospective hostile wives reading how deeply I disrespect them for preferring their husbands dead.
Tyler’s criticism of cryonics, shared by others including me at times:
Why not save someone else’s life instead?
This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish?
I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics.
Cryonics is about what will happen in a *long time* when you *die* to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*).
You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with or something. Far mode livens our passion for moral causes and abstract values. If Robin is right, this is because it’s safe to be ethical about things that won’t affect you, yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim that someone else a long way away should have been nice fifty years ago. So when technology brings the potential for far things to affect us more, we mostly don’t have the built-in selfishness required to zealously chase the offerings.
This theory predicts that other personal expenditures on far mode items will also seem unusually selfish. Here are some examples of psychologically distant personal expenditures to test this:
- space tourism
- donating to/working on life extension because you want to live forever
- traveling in far away socially distant countries without claiming you are doing it to benefit or respect the locals somehow
- astronomy for personal gain
- buying naming rights to stars
- lottery tickets
- maintaining personal collections of historical artifacts
- building statues of yourself to last long after you do
- recording your life so future people can appreciate you
- leaving money in your will to do something non-altruistic
- voting for the party that will benefit you most
- supporting international policies to benefit your country over others
I’m not sure how selfish these seem compared to other non-altruistic purchases. Many require a lot of money, which makes anything seem selfish I suspect. What do you think?
If this theory is correct, does it mean cryonics is unfairly slighted because of a silly quirk of psychology? No. Your desire to be ethical about far away things is not obviously less real or legitimate than your desire to be selfish about near things, assuming you act on it. If psychological distance really is morally relevant to people, it’s consistent to think cryonics too selfish and most other expenditures not. If you don’t want psychological distance to be morally relevant then you have an inconsistency to resolve, but how you should resolve it isn’t immediately obvious. I suspect however that as soon as you discard cryonics as too selfish you will get out of far mode and use that money on something just as useless to other people and worth less to yourself, but in the realm more fitting for selfishness. If so, you lose out on a better selfish deal for the sake of not having to think about altruism. That’s not altruistic; it’s worse than selfishness.
Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.
When is it efficient to kill humans?
At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property minus the cost of stealing it was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human will be replaced immediately, and the replacement will use the resources enough better to make up for the costs of stealing and replacement, the human is worth more dead. This might be soon after humans are overtaken. However such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?
What does law do?
In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.
With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance.
Let’s assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and personality that they like.
First, look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information people have, the more errors they make in comparing, so the more chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than themselves, the same holds: with more errors you mistakenly want some more desirable people, but you also mistakenly want some less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups.
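The rank argument can be sketched with a quick simulation. Everything here is a hypothetical model of my own, not anything from the post: each person has a hidden desirability, each observes another’s desirability with Gaussian error (the error size standing in for how little information is available), and a pair counts as mutually interested only if each perceives the other as at least as desirable as themselves.

```python
import random

def mutual_interest_rate(noise, n=500, pairs=5000, seed=0):
    """Fraction of random pairs in which each party perceives the
    other as at least as desirable as themselves (rank-only dating).
    `noise` is the standard deviation of the observation error;
    noise = 0 corresponds to perfect information."""
    rng = random.Random(seed)
    desirability = [rng.random() for _ in range(n)]
    mutual = 0
    for _ in range(pairs):
        i, j = rng.sample(range(n), 2)
        # Each observes the other's desirability with Gaussian error.
        i_sees_j = desirability[j] + rng.gauss(0, noise)
        j_sees_i = desirability[i] + rng.gauss(0, noise)
        if i_sees_j >= desirability[i] and j_sees_i >= desirability[j]:
            mutual += 1
    return mutual / pairs

for noise in (0.0, 0.1, 0.3):
    print(f"noise {noise}: mutual interest in "
          f"{mutual_interest_rate(noise):.1%} of pairs")
```

Under this toy model, perfect information yields no mutual interest at all (whoever ranks lower is always rejected), while larger observation errors produce more pairs who each mistakenly rate the other as at least their equal – the pattern the argument above describes.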
How about match then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand, if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about an overall average desirability of characteristics – then more information is as likely to help as harm, assuming the average person is averagely desirable. Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you discover those board games you both like. There are more possibilities.
You may argue that you will get all the information you want in the end, and the question is only speed – the hookups prevented by everyone knowing more initially are those that would have failed later anyway. However flaws that would stop you going anywhere near a person at the start are often ‘endearing’ when you discover them too late, and once loving delusions are in place they can hide or divert attention from further flaws, so the rate of information discovery matters. To the extent we care about rank then, more information should mean fewer relationships. To the extent we care about match, it’s unclear without knowing more about what we want.