AI: is research like board games or seeing?

‘The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’ – that, somehow, is so much harder!”’
– Nick Bostrom, Superintelligence, p. 14

There are some activities we think of as involving substantial thinking that we haven’t tried to automate much, presumably because they require some of the ‘not thinking’ skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the ‘without thinking’ tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI – like chess in a new domain – or be hard like the ‘without thinking’ tasks?

Sebastian Hagen points out that we haven’t automated math, programming, or debugging. These seem much like research, and at least don’t require complicated interfacing with the world.

Crossposted from Superintelligence Reading Group.

Discontinuous paths

In my understanding, technological progress almost always proceeds relatively smoothly (see algorithmic progress, the performance curves database, and this brief investigation). Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch.

Similarly, if an advanced civilization kept their nanotechnology locked up nearby, then our incremental progress in lock-picking tools might suddenly give rise to a huge leap in nanotechnology from our perspective, whereas earlier lock picking progress wouldn’t have given us any noticeable nanotechnology progress.

If this is an unusual situation, however, it seems strange that the other most salient route to superintelligence – artificial intelligence designed by humans – is also often expected to involve a discontinuous jump in capability, but for entirely different reasons. Is there some unifying reason to expect jumps on both routes to superintelligence, or is it just coincidence? Or do I overstate the ubiquity of incremental progress?

Crossposted from my own comment on the Superintelligence reading group. Commenters are encouraged to respond over there.

High ulterior motives

People with ulterior motives are often treated with suspicion and contempt. In a world driven substantially by ulterior motives, this can lead to despair, both for the ulteriorly motivated and for the suspicious and contemptuous. How terrible not to be able to trust your friends, your brothers, yourself!

At this point it is worth noting two things:

  1. This kind of extreme concern about everyone being corrupt seems to afflict only the more philosophically minded. This suggests that few practical problems arise from everyone having poor motives.
  2. In general, if you have a scorecard on which you always score zero, it is likely that you are not using the most useful scoring system. You shouldn’t necessarily change your overall goal of doing better on that metric, but for now it might be convenient to differentiate the space within ‘zero’.

It seems to me that an important way in which ulterior motives vary is the extent to which they align with the non-ulterior motives you would like the person in question to have.

Suppose you would like to leave your small child with a babysitter, Sam. Unfortunately, you have learned that Sam is not motivated purely by the desire to care for your child. He has an ulterior motive for agreeing to babysit. How much does this trouble you?

  • If Sam runs a babysitting company, and really he just wants his babysitting company to thrive, then you should basically not be concerned at all.
  • If Sam just wants to try out babysitting once, to see what it’s like, you should be more concerned.
  • If Sam really just wants a chance to use your big-screen TV this one time, you should be even more concerned.
  • If Sam just wants a chance to steal your baby so that he can sell it on the black market, you should be truly very worried.

Your worry in these cases tracks the extent to which Sam’s ulterior motives will cause him to do exactly what he would do if he just fundamentally wanted to care for your baby. If he wants his business to go well, he will do what you want him to do, to the extent that you can tell and are willing to pay. If he wants to try out babysitting, he will probably at least hang out with your child and do the basic babysitting motions. If he wants to use your TV, there’s not much reason he will do anything besides spend part of the evening in the same building as your child. If he wants to steal your child, his motives diverge from yours from the moment he arrives at your house.

I claim that in general, ulterior motives are more troubling to us – and should be – if they are less well aligned with the purported high motives. I suspect they also feel more ‘ulterior’ when they are less well aligned, both to the person who has them and to the observer.

Ulterior motives like ‘make money’ and ‘get respect’ tend to be relatively well aligned, I think. If you are aiming to do task X, but really you just want respect, and your actions or success at X will be visible to someone who might give you respect, then you will act like a person who wants to do task X, down to (at least some) minor details.

Ulterior motives that are troubling tend not to be well aligned with purported motives: either the person will not do the thing they purport to care about, or they will do it, but simultaneously do something you don’t want.

For instance, suppose you give me a compliment, with the hope that I will then help you move house. Your overt motive is implicitly something like honestly communicating to me, while your ulterior motive is to get moving help. A compliment motivated by your ulterior motive will probably not also be honest communication with me, so your behavior hardly aligns with your overt motive at all. On top of that, your ulterior motive means you will try to cause me to help you move house, a random other thing I don’t want to happen which has nothing to do with your overt motive.

This is not the only axis on which ulterior motives are better or worse. A different way an ulterior motive might be particularly bad is if it is the motive itself that matters to you, rather than the behavior. For instance, if a person is friends with you merely to get your money, you may be dissatisfied regardless of how friendly this makes them.

I think it would be better if we distinguished ‘low ulterior motives’ – which involve hardly caring about the overt goal – from ‘high ulterior motives’ – which align closely with the overt goal across many circumstances. Some people (perhaps read ‘all people’) pretend that they want to do what is best for the world, when in fact they also strongly want to be respected and praised and so on. Some people want to steal your baby. Conflating the two does not seem great, terminologically or psychologically.

I’m not saying that we shouldn’t criticize motives like desire for respect or money (or that we should). I merely suggest that if we want to do those things, we criticize these motives on their own merits, rather than lumping them in with much more hazardous low motives and cheaply criticizing ‘ulterior motives’ in general. 


[Image caption: No ulterior motives]

AI surprise research: high value?

If artificial intelligence were about to become ‘human-level’, do you think we (society) would get advance notice? Would artificial intelligence researchers have been talking about it for years? Would tell-tale precursor technologies have triggered the alert? Would it be completely unsurprising, because AIs had been able to do almost everything that humans could do for a decade, and had been catching up at a steady pace?

Whether we would be surprised then seems to make a big difference to what we should do now. Suppose that there are things someone should do before human-level AI appears (a premise of most current efforts to mitigate AI impacts). If there will be a period in which many people anticipate human-level AI soon, then probably someone will do the highest priority things. If you try to do them now, your work may merely substitute for theirs, or just fail because it is hard to see what needs doing so far ahead of time. So if you think AI will not be surprising, then the best things to do regarding AI now will tend to be the high-value things which require a longer lead time. This might include building better institutions and capabilities; shifting AI research trajectories; doing technical work that is hard to parallelize; or looking for ways to get a clear warning earlier.

Anders Sandberg has put some thought into warning signs for AI (http://www.aleph.se/andart/archives/2006/10/warning_signs_for_tomorrow.html).

On the other hand, if the advent of human-level AI were very surprising, then only a small group of people would ever respond to the anticipation of human-level AI (including those who are already concerned about it). This makes it more likely that a person who anticipates human-level AI now – as a member of that small group – should work on the highest priority things that will ever need to be done about it. This might include object-level tools for controlling moderately powerful intelligences, or designing features that would lower the risks from such intelligences.

I have just argued that the best approach for dealing with concerns about human-level AI should depend on how surprising we expect it to be. I also think there are relatively cheap ways to shed light on this question that (as far as I know) haven’t received much attention. For instance, one could investigate:

  1. How well can practitioners in related areas usually predict upcoming developments? (especially for large developments, and for closely related fields)
  2. To what extent is progress in AI driven by conceptual progress, and to what extent is it driven by improvements in hardware?
  3. Do these happen in parallel for a given application, or e.g. does some level of hardware development prompt software development?
  4. Looking at other areas of technological development, what warnings have been visible before large, otherwise surprising changes? What properties go along with surprising changes?
  5. What kinds of evidence of upcoming change motivate people to action, historically?
  6. What is the historical prevalence of discontinuous progress in analogous areas, e.g. technology in general, software, and algorithms? (I’ve investigated this a bit; very preliminary results suggest discontinuous progress is rare.)

Whether brain emulations, brain-inspired AI, or more artificial AI comes first is also relevant to this question, as are our expectations about the time until human-level AI appears. So investigations which shed light on those issues should also shed light on this one.

Several of the questions above might be substantially clarified with less than a person-year of effort. With that degree of sophistication, I think they have a good chance of changing our best guess about the degree of warning to expect, and perhaps about what people concerned about AI risk should do now. Such a shift seems valuable.

I have claimed that the surprisingness of human-level AI makes quite a difference to what those concerned about AI risk should do now, and that learning more about this surprisingness is cheap and neglected. So it seems pretty plausible to me that investigating the likely surprisingness of human-level AI is a better deal at this point than acting on our current understanding.

I haven’t made a watertight case for the superiority of research into AI surprise, and don’t necessarily believe it. I have gestured at a case, however. Do you think it is wrong? Do you know of work on these topics that I should know about?

[Image caption: happy AI day]

Conversation with Paul Penley of Excellence in Giving

Crossposted from the 80,000 Hours blog. Part of the Cause Prioritization Shallow, all parts of which are available here. Previously in this series: conversations with Owen Cotton-Barratt and Paul Christiano.

 

Participants

  • Paul Penley: Director of Research, Excellence in Giving
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute
  • Nick Beckstead: Research Fellow, Future of Humanity Institute; Board of Trustees, Center for Effective Altruism

Notes

This is a summary of Paul Penley’s points in a conversation on April 3, 2014, written by Katja with substantial help from the other participants.

What kind of philanthropic advising work does Excellence in Giving do?

Excellence in Giving is a philanthropic advisory firm with around seven staff members. Around 20 families retain them to act in the place of foundation staff, and other foundations consult with them on specific issues. Excellence in Giving provides experienced staff who can share with new family foundations what they have learned serving clients over the past 12 years. They don’t manage money, but they do track giving, structure grants, research grant effectiveness, produce grant impact reports, and plan experiences for clients to see and celebrate the impact of their giving.

The research department at Excellence in Giving evaluates nonprofit organizations, performs community solutions assessments, and sets up outcomes measurement processes for grantees, to ensure its clients support strategic, well-managed charities making a difference. Evaluations at the organization level are discussed online here. Such organizational evaluations have taken place at after-school programs in Iowa, colleges in Oxford, children’s charities in Uganda and Kenya, and charitable trusts in north India.

Assessments of community needs and solutions tend to focus on a geographic location (e.g. Chicago), a population to be served (e.g. inner-city youths), and a focus area (e.g. early childhood development). They will identify problems among the population to be served that are related to the focus area, organizations that work on those problems, and solutions with evidence of helping to alleviate those problems. This involves making tough judgment calls with data from quantitative and qualitative research methods in hand. Often an intervention is valuable, but enough work is already being done on it. This judgment call has to be made, but it can be hard for people working on an issue to hear that it’s not strategic to put more effort into a problem while the problem persists.

It is important in this process to distinguish the senses in which a problem seems bad. Sometimes a problem seems bad because a situation is suboptimal, though it may be rapidly getting better. This should be distinguished from something which seems bad because it is deteriorating, or because it is stably resistant to improvement.

This community-level evaluation process might yield several opportunities to address real needs with solutions that have strong evidence behind them, and Excellence in Giving then works with the client to select among those opportunities. The decision between the best contenders tends to be made based on personal values and beliefs.

Sometimes these evaluations are done at the geographic location level, open to any population and focus area (e.g. any problem at all in Chicago that is relevant to any population). Excellence in Giving is currently working on a Community Solutions Assessment in Glynn County, Georgia for a local client. They have also done these evaluations with a specific focus area in mind but in any geographic location (e.g. human trafficking anywhere in the world). They do not have experience doing this kind of open-ended work in medical research. They have not done fully open-ended research on any geographic location, any population, and any focus area. Paul suspects this would be prohibitively difficult to do well and would involve highly questionable judgment calls. They have not had demand for this type of research.

Paul is familiar with effective altruism and says they do not frequently encounter people with the kind of utilitarian mindset that might make extremely open-ended, cause-neutral research attractive. Paul is sympathetic to the motives, but skeptical of the feasibility of such a research program, given the number of highly controversial judgment calls that would be involved. It is easy to rank children dying in Africa ahead of wild donkey preservation efforts (contra Rockefeller Philanthropy Advisors’ purely issue-agnostic stance), but creating some calculation of value that can rank all issues in all geographies among all vulnerable populations is ultimately subjective and uncertain. He mentions that the Hewlett Foundation until recently had a program to encourage effective philanthropy, but abandoned it because they were unsatisfied with the results.

The evaluations that Excellence in Giving does are the property of the clients who pay for them. Excellence in Giving is happy for clients to make this research publicly available if they wish. Sometimes clients have been quite enthusiastic about publicizing the reports, for instance making websites to showcase them. However, most clients want the organizations to improve after reviewing the critiques and recommendations, rather than be pigeonholed with all their problems. Most charities Excellence in Giving evaluates issue a written response with action steps for improvement, so the evaluation’s sponsor knows how he or she helped make them more effective.

If Excellence in Giving were serving a philanthropist who wanted to put more resources into finding good opportunities within an area, there is much more work that could be done. That is, at the usual scale of such investigations, they are not reaching strongly diminishing returns to research. There is always more primary research about a community that can be done, to judge what is most needed and what is actually driving transformation. That is one reason Excellence in Giving sets up outcomes measurement processes for grantees. They want clients to know whether they are really making a sustainable difference in the lives of beneficiaries, or just an annual donation to a charity’s budget.

Clients

Before investigating potential interventions, Excellence in Giving endeavors to thoroughly understand a philanthropist’s personal journey and formation of values. As part of this process, the philanthropist completes a detailed survey, covering their lifetime experiences, professional background, interests, beliefs and values. Paul believes it is important to meet people where they are through this process, even if the long-term goal is to educate them toward more effective and strategic giving priorities.

Nobody ever hears their pitch and says ‘why would you want to do that?’. Almost everyone agrees with Excellence in Giving’s goals in principle. But saying you want to support effective organizations solving the world’s greatest problems (where possible) is different from investing the time and money to do so. As the Money for Good 2010 study found, 85% of funders may want to fund effective organizations, but only 3% compare the effectiveness of charities when deciding whom to support.

Resources they use

There is a huge amount of information available already, in the form of academic articles, across many areas. However, it is quite hard to find and use. There may be ways to organize it that would make this work easier. Associations of funders, like the Philanthropy Roundtable in the USA, do issue papers on effective interventions to support, but no comprehensive, easily accessible, and searchable repository of effective interventions to fund – across different issues, geographies, and populations – exists.

Similarities and differences to other organizations

Excellence in Giving’s research is substantially more in-depth than what most family foundations conduct for their own purposes. They have also developed sophisticated tools for assessing organizational health, a focus that they believe sets them apart from, for instance, GiveWell. Paul expects they would be more concerned than GiveWell if an organization with a historically good program had a change in leadership, for instance. Excellence in Giving is also less inclined to publish a general ‘top three’ list, as they are uncomfortable making the arbitrary judgment calls required to do this across all issues, geographies, and populations.

There are probably a dozen philanthropic advisory firms like Excellence in Giving, and a large number of individuals who do similar consulting work. (There is a good website listing such groups here.) Some of these individuals and firms create conflicts of interest by doing philanthropic advisory work interspersed with working for the charities which seek funding from philanthropists.

How could someone get into this kind of work?

Excellence in Giving is always looking for interns and is currently hiring for an entry-level research position. Someone qualified to do this work would be capable of analyzing academic research relevant to philanthropic advising and presenting it in a way that would be relevant and compelling to someone from the business world. It’s important that someone doing this work is willing to discriminate between philanthropic opportunities, rather than being enthusiastic about all options. They generally prefer candidates with some experience in academic research at the master’s level and some work experience, though this is not a strict requirement and decisions are made on a case-by-case basis. To a certain extent, some people are just ‘wired differently’ in a way that makes them good at this work. They naturally analyze, read between the lines, can focus on problems for days on end, and have uncommonly good common sense about what works in the real world.

What future opportunities would be available to someone who worked in this space?

Someone who took a research job at Excellence in Giving could advance within the company, since the firm continues to grow its top-line revenue by double digits every year. They would also be a natural candidate for work as a program officer at a foundation. Experience in philanthropic advising and evaluation gives people who want to run a non-profit both knowledge and wisdom about best practices in different program areas and contexts.