Tag Archives: future

Economic growth and parallelization of work

Eliezer suggests that increased economic growth is likely bad for the world, as it should speed up AI progress relative to work on AI safety. He reasons that this should happen because safety work is probably more dependent on insights building upon one another than AI work is in general. Thus work on safety should parallelize less well than work on AI, so should be at a disadvantage in a faster paced economy. Also, unfriendly AI should benefit more from brute computational power than friendly AI. He explains,

“Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.” …

“Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.”

I’m sympathetic to others’ criticisms of this argument, but would like to point out a more basic problem, granting all other assumptions. As far as I can tell, the effect of economic growth on parallelization should go the other way. Economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.

Economic growth, without substantial population growth, means that each person is doing more work in their life. This means the work that would have otherwise been done by a number of people can be done by a single person, in sequence. The number of AI researchers at a given time shouldn’t obviously change much if the economy overall is more productive. But each AI researcher will have effectively lived and worked for longer, before they are replaced by a different person starting off again ignorant. If you think research is better done by a small number of people working for a long time than a lot of people doing a little bit each, economic growth seems like a good thing.

On this view, economic growth is not like speeding up time – it is like speeding up how fast you can do things, which is like slowing down time. Robotic cars and more efficient coffee lids alike mean researchers (and everyone else) have more hours per day to do things other than navigate traffic and lid their coffee. I expect economic growth seems like speeding up time if you imagine it speeding up others’ abilities to do things and forget it also speeds up yours. Or alternatively if you think it speeds up some things everyone does, without speeding up some important things, such as people’s abilities to think and prepare. But that seems not obviously true, and would anyway be another argument.

Why do ‘respectable’ women want dead husbands?

I find this hostile wife phenomenon pretty confusing for several reasons:

  1. Wanting other people dead is generally considered an evil character trait. Most people either don’t have such wishes or don’t admit to them. This is especially the case when the person you prefer dead is someone you are meant to be loyal to. Often this applies even if they are permanently unconscious. The ‘analogy’ between wanting someone dead and insisting they don’t get cryonics is too clear to be missed by anyone.
  2. People don’t usually seem to care much about abstract beliefs or what anybody is going to do in the distant future, except as far as these things imply character traits or group ties. If the fact your partner is likely to die at all in the distant future isn’t enough to scare you away, I can’t see how anything he might do after that can be disturbing.
  3. People tend to care a lot about romantic partners, compared to other things. They are often willing to change religion or overlook homicidal tendencies to be with the one they love. Romance is infamous for making people not care about whether the sky falls, or they catch deadly venereal diseases.
  4. The hostile wife phenomenon seems to be a mostly female thing, but doesn’t go with any especially strong female-specific characteristics I know of. Women overall don’t especially resist medical spending for instance, and are often criticized as a group for enjoying otherwise pointless expenditures too much.
  5. My surprise is probably somewhat exacerbated by pre-existing surprise over many people wanting to die ever, but that is another issue.

Partial explanations of the hostile wife phenomenon offered by Darwin, de Wolf and de Wolf, Quentin, C (#44), Robin Hanson, and others:

  • Women are more often hostile just because most interested parties are heterosexual men (This is presumably some part of the explanation, but not much – in around ninety cases of significant others interfering in cryonics arrangements recorded between 1978 and 1986 I count four males, while roughly one quarter of Alcor’s members are women. It wouldn’t explain the strong opposition anyway, nor the fact that men are more interested in cryonics to begin with).
  • Women really don’t like looking strange (according to Darwin and the de Wolfs, women often claim deep embarrassment. This just raises the question of why it’s so embarrassing. Plenty of people have plenty of strange opinions about all sorts of far off things, and they can usually devote some resources to them before it becomes so problematic for their partner).
  • Cryonics looks like a one way ticket to somewhere else, where other women are, which also makes here and now less significant, and shows the man could go on with life without the woman (this is at least a cost in terms of something that usually matters in relationships. But why not go with him then? Why not divorce him over his exercise habits? Why wouldn’t men have similar concerns?)
  • Cost, perhaps specifically that it is selfish not to give money to more needy, or to the wife or family (But people put up with huge expense on other funeral rituals and last minute attempts to live longer. Perhaps cryonics just looks less likely to do what it is meant to do? Would it be more admissible if it weren’t meant to do anything? Why would women care about this more than men?)
  • Separation in whatever other afterlife the spouse has planned (this could only explain whatever proportion of religious people don’t believe you go to the same place eventually after living longer, and should apply to men also)
  • Cryonics is seen as a substitute to caring about raising family, since you don’t need genetic immortality if you have proper immortality (if genetic immortality is a common conscious reason to invest in a family I’m not aware of it, and this shouldn’t especially apply to women)
  • Wives object to their husband joining a boys’ club, and feel left out (this only makes sense for those heavily socially involved in a cryonics organization, and I understand this phenomenon is much broader)
  • Thinking styles: women don’t like risky things, ‘global solutions’, or the sort of innovative thinking required to appreciate cryonics (this is Darwin and the de Wolfs’ main answer. It is made of controversial assumptions and wouldn’t explain strong antagonism anyway, just lack of enthusiasm. Even if you aren’t a fan of risk, it’s generally considered better than complete failure).
  • Women either want to die, or have tenuously justified doing it, and resent being presented alternatives. This also explains why the answer to many of the above things is not to just sign up with your husband (But I see no evidence women want to die especially much, and while apparently many people have come to terms with their mortality enough to not fight it, I don’t think this is much higher among females than males)

None of these is satisfying. Got any more?

On the off chance the somewhat promising social disapproval hypothesis is correct, I warn any prospective hostile wives reading how deeply I disrespect them for preferring their husbands dead.

Is cryonicists’ selfishness distance induced?

Tyler’s criticism of cryonics, shared by others including me at times:

Why not save someone else’s life instead?

This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish?

I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics.

Cryonics is about what will happen in a *long time* when you *die*  to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*).

You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with or something.  Far mode livens our passion for moral causes and abstract values.  If Robin is right, this is because it’s safe to be ethical about things that won’t affect you yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim someone else a long way away should have been nice fifty years ago.  So when technology brings the potential for far things to affect us more, we mostly don’t have the built in selfishness required to zealously chase the offerings.

This theory predicts that other personal expenditures on far mode items will also seem unusually selfish. Here are some examples of psychologically distant personal expenditures to test this:

  • space tourism
  • donating to/working on life extension because you want to live forever
  • traveling in far away socially distant countries without claiming you are doing it to benefit or respect the locals somehow
  • astronomy for personal gain
  • buying naming rights to stars
  • lottery tickets
  • maintaining personal collections of historical artifacts
  • building statues of yourself to last long after you do
  • recording your life so future people can appreciate you
  • leaving money in your will to do something non-altruistic
  • voting for the party that will benefit you most
  • supporting international policies to benefit your country over others

I’m not sure how selfish these seem compared to other non-altruistic purchases. Many require a lot of money, which makes anything seem selfish I suspect. What do you think?

If this theory is correct, does it mean cryonics is unfairly slighted because of a silly quirk of psychology? No. Your desire to be ethical about far away things is not obviously less real or legitimate than your desire to be selfish about near things, assuming you act on it. If psychological distance really is morally relevant to people, it’s consistent to think cryonics too selfish and most other expenditures not. If you don’t want psychological distance to be morally relevant then you have an inconsistency to resolve, but how you should resolve it isn’t immediately obvious. I suspect however that as soon as you discard cryonics as too selfish you will get out of far mode and use that money on something just as useless to other people and worth less to yourself, but in the realm more fitting for selfishness. If so, you lose out on a better selfish deal for the sake of not having to think about altruism. That’s not altruistic, it’s worse than selfishness.

Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not when it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property minus the cost of stealing it was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human will be replaced immediately, and the replacement will use resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However such reasoning is really imagining one powerful AI’s dealings with one person, then assuming that generalizes to many of each. Does it?

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.

How does information affect hookups?

With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance.

Let’s assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and personality that they like.

First look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information they have, the more errors in comparing, so the more chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than themselves, the same holds – by making more errors you trade wanting more desirable people for wanting less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups.
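The rank argument above is easy to check with a toy simulation. This is a hypothetical sketch, not anything from the post: agents have unique ranks, each perceives others’ ranks with Gaussian noise, and a pair ‘hooks up’ only if each perceives the other as at least their own rank. With zero noise (perfect information) no pair is ever mutually willing; adding noise creates mutual willingness.

```python
import random

def expected_hookups(n=100, noise=1.0, trials=200, seed=0):
    """Average number of mutually willing pairs when each person
    will only date those they perceive as at least their own rank."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        ranks = list(range(n))  # unique desirability ranks
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                # each side's noisy estimate of the other's rank
                i_sees_j = ranks[j] + rng.gauss(0, noise)
                j_sees_i = ranks[i] + rng.gauss(0, noise)
                # mutual willingness: each thinks the other is
                # no less desirable than themselves
                if i_sees_j >= ranks[i] and j_sees_i >= ranks[j]:
                    count += 1
        total += count
    return total / trials

# Perfect information with unique ranks: never any dating at all.
print(expected_hookups(n=50, noise=0.0, trials=20))  # 0.0
# Noisier information: some pairs each overestimate the other.
print(expected_hookups(n=50, noise=2.0, trials=20))
```

Raising `noise` (less information) monotonically raises the count, matching the claim that, on rank alone, more information means fewer hookups.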

How about match then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about an overall average desirability of characteristics – then more information is as likely to help or harm assuming the average person is averagely desirable. Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you discover those board games you both like. There are more possibilities.

You may argue that you will get all the information you want in the end, the question is only speed – the hookups prevented by everyone knowing more initially are those that would have failed later anyway. However flaws that dissuade you from approaching one person with a barge pole are often ‘endearing’ when you discover them too late, and once they are in place loving delusions can hide or remove attention from more flaws, so the rate of information discovery matters. To the extent we care about rank then, more information should mean fewer relationships. To the extent we care about match, it’s unclear without knowing more about what we want.

SIA doomsday: The filter is ahead

The great filter, as described by Robin Hanson:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is that and that you exist, SIA says heads was twice as likely as tails. This is contentious; many people think in such a situation you should think heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you we are doomed anyway with SIA.
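The coin example can be worked through explicitly. This is just a sketch of the arithmetic described above, nothing beyond it: weight each world’s prior by its number of observers, then renormalize.

```python
from fractions import Fraction

# A fair coin decides whether one (tails) or two (heads) people exist.
prior = {"tails": Fraction(1, 2), "heads": Fraction(1, 2)}
population = {"tails": 1, "heads": 2}

# SIA: weight each possible world by how many observers it contains.
weighted = {w: prior[w] * population[w] for w in prior}
total = sum(weighted.values())
posterior = {w: weighted[w] / total for w in weighted}

print(posterior["heads"])  # 2/3
print(posterior["tails"])  # 1/3 — heads is twice as likely as tails
```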

Consider the diagrams below. The first one is just an example with one possible world so you can see clearly what all the boxes mean in the second diagram which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter. This is where most of the planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages and the concentration of the filter are for simplicity; in reality the filter needn’t be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy colonizing civilization. None of these things are important to this argument.

[Figure: Diagram key]

The second diagram shows three possible worlds where the filter is in different places. In every case one planet reaches the last stage in this model – this is to signify a small chance of reaching the last step, because we don’t see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage, earthbound technological civilization say. Assume the various places we think the filter could be are equally likely.

[Figure: SIA doom]

This is how to reason about your location using SIA:

  1. The three worlds begin equally likely.
  2. Update on your own existence using SIA by multiplying the likelihood of worlds by their population. Now the likelihood ratio of the worlds is 3:5:7.
  3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course if we began with an accurate number of planets in each possible world, the 3 would be humongous and we would be much more likely to be in an unfiltered world.

Therefore we are much more likely to be in worlds where the filter is ahead than behind.
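The three-step update can be reproduced numerically. The planet counts below are my reading of the diagram (three planets, three stages, one survivor at the top in every world); they recover the 3:5:7 and 1:1:3 ratios in the list above.

```python
from fractions import Fraction

# Living planets at each of the three stages, for each filter location.
worlds = {
    "filter before stage 1": [1, 1, 1],  # total observers: 3
    "filter before stage 2": [3, 1, 1],  # total observers: 5
    "filter before stage 3": [3, 3, 1],  # total observers: 7
}

# Step 1: the three worlds begin equally likely.
prior = Fraction(1, len(worlds))

# Step 2: SIA weights each world by its total observer count -> ratio 3:5:7.
sia_weight = {w: prior * sum(stages) for w, stages in worlds.items()}

# Step 3: condition on being in the middle stage. The SIA weight times the
# fraction of observers in the middle stage reduces to prior * middle count.
middle = {w: prior * stages[1] for w, stages in worlds.items()}
total = sum(middle.values())
posterior = {w: middle[w] / total for w in middle}

for w, p in posterior.items():
    print(w, p)  # ratio 1:1:3 — the late-filter world dominates
```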

—-

Added: I wrote a thesis on this too.


Everyone else prefers laws to values

How do you tell what a superhuman AI's values are? ( picture: ittybittiesforyou - see bottom)

Robin Hanson says that it is more important to have laws than shared values. I agree with him when ‘shared values’ means that shared indexical values remain about different people, e.g. if you and I share a high value of orgasms, you value you having orgasms and I value me having orgasms. Unless we are dating it’s all the same to me if you prefer croquet to orgasms. I think the singularitarians aren’t talking about this though. They want to share values in such a way that the AI wants them to have orgasms. In principle this would be far better than having different values and trading. Compare gains from trading with the world economy to gains from the world economy’s most heartfelt wish being to please you. However I think that laws will get far more attention than values overall in arranging for an agreeable robot transition, and rightly so. Let me explain, then show you how this is similar to some more familiar situations.

Greater intelligences are unpredictable

If you know exactly what a creature will do in any given situation before it does it, you are at least as smart as it (if we don’t include its physical power as intelligence). Greater intelligences are inherently unpredictable. If you know what the intelligence is trying to do, then you know what kind of outcome to expect, but guessing how it will get there is harder. This should be less so for lesser intelligences, and more so for more different intelligences. I will have less trouble guessing what a ten year old will do in chess against me than a grand master, though I can guess the outcome in both cases. If I play someone with a significantly different way of thinking about the game they may also be hard to guess.

Unpredictability is dangerous

This unpredictability is a big part of the fear of a superhuman AI. If you don’t know what path an intelligence will take to the goal you set it, you don’t know whether it will affect other things that you care about. This problem is most vividly illustrated by the much discussed case where the AI in question is suddenly very many orders of magnitude smarter than a human. Imagine we initially gave it only a subset of our values, such as our yearning to figure out whether P = NP, and we assume that it won’t influence anything outside its box. It might determine that the easiest way to do this is to contact outside help, build powerful weapons, take more resources by force, and put them toward more computing power. Because we weren’t expecting it to consider this option, we haven’t told it about our other values that are relevant to this strategy, such as the popular penchant for being alive.

I don’t find this type of scenario likely, but others do, and the problem could arise at a lesser scale with weaker AI. It’s a bit like the problem that every genie owner in fiction has faced. There are two solutions. One is to inform the AI about all of human values, so it doesn’t matter how wide its influence is. The other is to restrict its actions. SIAI’s interest seems to be in giving the AI human values (whatever that means), then inevitably surrendering control to it. If the AI will likely be so much smarter than humans that it will control everything forever almost immediately, I agree that values are probably the thing to focus on. But consider the case where AI improves fast but by increments, and no single agent becomes more powerful than all of human society for a long time.

Unpredictability also makes it hard to use values to protect from unpredictability

When trying to avoid the dangers of unpredictability, the same unpredictability causes another problem for using values as a means of control. If you don’t know what an entity will do with given values, it is hard to assess whether it actually has those values. It is much easier to assess whether it is following simpler rules. This seems likely to be the basis for human love of deontological ethics and laws. Utilitarians may get better results in principle, but from the perspective of anyone else it’s not obvious whether they are pushing you in front of a train for the greater good or specifically for the personal bad. You would have to do all the calculations yourself and trust their information. You also can’t rely on them to behave in any particular way so that you can plan around them, unless you make deals with them, which is basically paying them to follow rules, so is more evidence for my point.

‘We’ cannot make the AI’s values safe.

I expect the first of these things to be a particular problem with greater than human intelligences. It might be better in principle if an AI follows your values, but you have little way to tell whether it is. Nearly everyone must trust the judgement, goodness and competency of whoever created a given AI, be it a person or another AI. I suspect this gets overlooked somewhat because safety is thought of in terms of what to do when *we* are building the AI. This is the same problem people often have thinking about government. They underestimate the usefulness of transparency there because they think of the government as ‘we’. ‘We should redistribute wealth’ may seem unproblematic, whereas ‘I should allow an organization I barely know anything about to take my money on the vague understanding that they will do something good with it’ does not. For people to trust AIs the AIs should have simple enough promised behavior that people using them can verify that they are likely doing what they are meant to.

This problem gets worse the less predictable the agents are to you. Humans seem to naturally find rules more important for more powerful people and consequences more important for less powerful people. Our world also contains some greater than human intelligences already: organizations. They have similar problems to powerful AI. We ask them to do something like ‘cheaply make red paint’ and often eventually realize their clever ways to do this harm other values, such as our regard for clean water. The organization doesn’t care much about this because we’ve only paid it to follow one of our values while letting it go to work on bits of the world where we have other values. Organizations claim to have values, but who can tell if they follow them?

To control organizations we restrict them with laws. It’s hard enough to figure out whether a given company did or didn’t give proper toilet breaks to its employees. It’s virtually impossible to work out whether their decisions on toilet breaks are as close to optimal according to some popularly agreed set of values.

It may seem this is because values are just harder to influence, but this is not obvious. Entities follow rules because of the incentives in place rather than because they are naturally inclined to respect simple constraints. We could similarly incentivise organizations to be utilitarian if we wanted. We just couldn’t assess whether they were doing it. Here we find rules more useful and values less for these greater than human intelligences than we do for humans.

We judge and trust friends and associates according to what we perceive to be their values. We drop a romantic partner because they don’t seem to love us enough even if they have fulfilled their romantic duties. But most of us will not be put off using a product because we think the company doesn’t have the right attitude, though we support harsh legal punishments for breaking rules. Entities just a bit superhuman are too hard to control with values.

You might point out here that values are not usually programmed specifically in organizations, whereas in AI they are. However this is not a huge difference from the perspective of everyone who didn’t program the AI. To the programmer, giving an AI all of human values may be the best method of avoiding assault on them. So if the first AI is tremendously powerful, so nobody but the programmer gets a look in, values may matter most. If the rest of humanity still has a say, as I think they will, rules will be more important.

How far can AI jump?

I went to the Singularity Summit recently, organized by the Singularity Institute for Artificial Intelligence (SIAI). SIAI’s main interest is in the prospect of a superintelligence quickly emerging and destroying everything we care about in the reachable universe. This concern has two components. One is that any AI above ‘human level’ will improve its intelligence further until it takes over the world from all other entities. The other is that when the intelligence that takes off is created it will accidentally have the wrong values, and because it is smart and thus very good at bringing about what it wants, it will destroy all that humans value. I disagree that either part is likely. Here I’ll summarize why I find the first part implausible; I discuss the second part elsewhere.

The reason that an AI – or a group of them – is a contender for gaining existentially risky amounts of power is that it could trigger an intelligence explosion which happens so fast that everyone else is left behind. An intelligence explosion is a positive feedback where more intelligent creatures are better at improving their intelligence further.

Such a feedback seems likely. Even now as we gain more concepts and tools that allow us to think well we use them to make more such understanding. AIs fiddling with their architecture don’t seem fundamentally different. But feedback effects are easy to come by. The question is how big this feedback effect will become. Will it be big enough for one machine to permanently overtake the rest of the world economy in accumulating capability?

In order to grow more powerful than everyone else you need to get significantly ahead at some point. You can imagine this could happen either by having one big jump in progress or by having slightly more growth over a long period of time. Having slightly more growth over a long period is staggeringly unlikely to happen by chance, so it too needs a single underlying cause. Anything that will give you higher growth for long enough to take over the world is a pretty neat innovation, and for you to take over the world everyone else has to not have anything close. So again, this is a big jump in progress. So for AI to help a small group take over the world, it needs to be a big jump.

Notice that no jumps have been big enough before in human invention. Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing. This makes it hard for anyone to get far ahead of everyone else. While you can see there are barriers to insights passing between groups, such as incompatible approaches to a kind of technology by different people working on it, these have not so far caused anything like a gap allowing permanent separation of one group.

Another barrier to a big enough jump is that much human progress comes from the extra use of ideas that sharing information brings. You can imagine that if someone predicted writing they might think ‘whoever creates this will be able to have a superhuman memory and accumulate all the knowledge in the world and use it to make more knowledge until they are so knowledgeable they take over everything.’ If somebody created writing and kept it to themselves they would not accumulate nearly as much recorded knowledge as another person who shared a writing system. The same goes for most technology. At the extreme, if nobody shared information, each person would start out with less knowledge than a cave man, and would presumably end up with about that much still. Nothing invented would be improved on. Systems which are used tend to be improved on more. This means if a group hides their innovations and tries to use them alone to create more innovation, the project will probably not grow as fast as the rest of the economy together. Even if they still listen to what’s going on outside, and just keep their own innovations secret, a lot of improvement in technologies like software comes from use. Forgoing information sharing to protect your advantage will tend to slow down your growth.

Those were some barriers to an AI project causing a big enough jump. Are the reasons for it good enough to make up for them?

The main argument for an AI jump seems to be that human level AI is a powerful and amazing innovation that will cause a high growth rate. But this means it is a leap from what we have currently, not that it is especially likely to be arrived at in one leap. If we invented it tomorrow it would be a jump, but that’s just evidence that we won’t invent it tomorrow. You might argue here that however gradually it arrives, the AI will be around human level one day, and then the next it will suddenly be a superpower. There’s a jump from the growth after human level AI is reached, not before. But if it is arrived at incrementally then others are likely to be close in developing similar technology, unless it is a secret military project or something. Also an AI which recursively improves itself forever will probably be preceded by AIs which self improve to a lesser extent, so the field will be moving fast already. Why would the first try at an AI which can improve itself have infinite success? It’s true that if it were powerful enough it wouldn’t matter if others were close behind or if it took the first group a few goes to make it work. For instance if it only took a few days to become as productive as the rest of the world added together, the AI could probably prevent other research if it wanted. However I haven’t heard any good evidence it’s likely to happen that fast.

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all powerful principle of controlling everything in the world isn’t evidence for it existing.

I agree human level AI will be a darn useful achievement and will probably change things a lot, but I’m not convinced that one AI or one group using it will take over the world, because there is no reason it will be a never before seen size jump from technology available before it.