High ulterior motives

People with ulterior motives are often treated with suspicion and contempt. In a world driven substantially by ulterior motives, this can lead to despair, both for the ulteriorly motivated and for the suspicious and contemptuous. How terrible to not be able to trust your friends, your brothers, yourself!

At this point it is worth noting two things:

  1. This kind of extreme concern about everyone being corrupt only seems to trouble the more philosophically minded. This suggests that few practical problems arise from everyone having poor motives.
  2. In general, if you have a scorecard on which you always score zero, it is likely that you are not using the most useful scoring system. You shouldn’t necessarily change your overall goal of doing better on that metric, but for now it might be convenient to differentiate the space within ‘zero’.

It seems to me that an important way in which ulterior motives vary is the extent to which they align with the non-ulterior motives you would like the person in question to have.

Suppose you would like to leave your small child with a babysitter, Sam. Unfortunately, you have learned that Sam is not motivated purely by the desire to care for your child. He has an ulterior motive for agreeing to babysit. How much does this trouble you?

  • If Sam runs a babysitting company, and really he just wants his babysitting company to thrive, then you should basically not be concerned at all.
  • If Sam just wants to try out babysitting once, to see what it’s like, you should be more concerned.
  • If Sam really just wants a chance to use your big-screen TV this one time you should be even more concerned.
  • If Sam just wants a chance to steal your baby so that he can sell it on the black market, you should be truly very worried.

Your worry in these cases tracks the extent to which Sam’s ulterior motives will cause him to do exactly what he would do if he just fundamentally wanted to care for your baby. If he wants his business to go well, he will do what you want him to do, to the extent that you can tell and are willing to pay. If he wants to try out babysitting, he will probably at least hang out with your child and do the basic babysitting motions. If he wants to use your TV, there’s not much reason he will do anything besides spend part of the evening in the same building as your child. If he wants to steal your child, his motives diverge from yours from the moment he arrives at your house.

I claim that in general, ulterior motives are more troubling to us – and should be – if they are less well aligned with the purported high motives. I suspect they also feel more ‘ulterior’ when they are less well aligned, both to the person who has them and to the observer.

Ulterior motives like ‘make money’ and ‘get respect’ tend to be relatively well aligned I think. If you are aiming to do task X, but really you just want respect, and your actions or success at X will be visible to someone who might give you respect, then you will act like a person who wants to do task X, down to (at least some) minor details.

Ulterior motives that are troubling tend not to be well aligned with the purported motives: either the person will not do the thing they purport to care about, or, often, they will do it but simultaneously do something you don’t want.

For instance, suppose you give me a compliment, with the hope that I will then help you move house. Your overt motive is implicitly something like honestly communicating to me, while your ulterior motive is to get moving help. A compliment motivated by your ulterior motive will probably not also be honest communication with me, so your behavior hardly aligns with your overt motive at all. On top of that, your ulterior motive means you will try to cause me to help you move house, a random other thing I don’t want to happen which has nothing to do with your overt motive.

This is not the only axis on which ulterior motives are better or worse. A different reason ulterior motives might be particularly bad is that it is the motives themselves, rather than the behavior, that matter to you. For instance, if a person is merely friends with you to get your money, regardless of how friendly this makes them, you may be dissatisfied.

I think it would be better if we distinguished ‘low ulterior motives’ – which involve hardly caring about the overt goal – from ‘high ulterior motives’ – which are closely aligned with the overt goal consistently across many circumstances. Some people (perhaps read ‘all people’) pretend that they want to do what is best for the world, when in fact they also strongly want to be respected and praised and so on. Some people want to steal your baby. Conflating the two does not seem great, terminologically or psychologically.

I’m not saying that we shouldn’t criticize motives like desire for respect or money (or that we should). I merely suggest that if we want to do those things, we criticize these motives on their own merits, rather than lumping them in with much more hazardous low motives and cheaply criticizing ‘ulterior motives’ in general. 


No ulterior motives

AI surprise research: high value?

If artificial intelligence were about to become ‘human-level’, do you think we (society) would get advance notice? Would artificial intelligence researchers have been talking about it for years? Would tell-tale precursor technologies have triggered the alert? Would it be completely unsurprising, because AIs had been able to do almost everything that humans could do for a decade, and had been catching up at a steady pace?

Whether we would be surprised then seems to make a big difference to what we should do now. Suppose that there are things someone should do before human-level AI appears (a premise of most current efforts to mitigate AI impacts). If there will be a period in which many people anticipate human-level AI soon, then probably someone will do the highest priority things then. If you try to do them now, your efforts might just be replaced by that later work, or fail because it is hard to see what needs doing so far ahead of time. So if you think AI will not be surprising, then the best things to do regarding AI now will tend to be the high value things which require a longer lead time. This might include building better institutions and capabilities; shifting AI research trajectories; doing technical work that is hard to parallelize; or looking for ways to get the clear warning earlier.

Anders Sandberg has put some thought into warning signs for AI (see http://www.aleph.se/andart/archives/2006/10/warning_signs_for_tomorrow.html).

On the other hand, if the advent of human-level AI will be very surprising, then only a small group of people will ever respond to the anticipation of human-level AI (including those who are already concerned about it). This makes it more likely that a person who anticipates human-level AI now – as a member of that small group – should work on the highest priority things that will ever need to be done about it. This might include object-level tools for controlling moderately powerful intelligences, or design of features that would lower the risks from those intelligences.

I have just argued that the best approach for dealing with concerns about human-level AI should depend on how surprising we expect it to be. I also think there are relatively cheap ways to shed light on this question that (as far as I know) haven’t received much attention. For instance, one could investigate:

  1. How well can practitioners in related areas usually predict upcoming developments? (especially for large developments, and for closely related fields)
  2. To what extent is progress in AI driven by conceptual progress, and to what extent is it driven by improvements in hardware?
  3. Do these happen in parallel for a given application, or e.g. does some level of hardware development prompt software development?
  4. Looking at other areas of technological development, what warnings have ever been visible of large otherwise surprising changes? What properties go along with surprising changes?
  5. What kinds of evidence of upcoming change motivate people to action, historically?
  6. What is the historical prevalence of discontinuous progress in analogous areas (e.g. technology in general, software, algorithms)? (I’ve investigated this a bit; very preliminary results suggest discontinuous progress is rare.)

Whether brain emulations, brain-inspired AI, or more artificial AI come first is also relevant to this question, as are our expectations about the time until human-level AI appears. So investigations which shed light on those issues should also shed light on this one.

Several of the questions above might be substantially clarified with less than a person-year of effort. With that degree of sophistication, I think they have a good chance of changing our best guess about the degree of warning to expect, and perhaps about what people concerned about AI risk should do now. Such a shift seems valuable.

I have claimed that the surprisingness of human-level AI makes quite a difference to what those concerned about AI risk should do now, and that learning more about this surprisingness is cheap and neglected. So it seems pretty plausible to me that investigating the likely surprisingness of human-level AI is a better deal at this point than acting on our current understanding.

I haven’t made a watertight case for the superiority of research into AI surprise, and don’t necessarily believe it. I have gestured at a case however. Do you think it is wrong? Do you know of work on these topics that I should know about?

happy AI day

Conversation with Paul Penley of Excellence in Giving

Cross-posted from the 80,000 Hours Blog. Part of the Cause Prioritization Shallow investigation, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt and Paul Christiano.

 

Participants

  • Paul Penley: Director of Research, Excellence in Giving
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute
  • Nick Beckstead: Research Fellow, Future of Humanity Institute; Board of Trustees, Center for Effective Altruism

Notes

This is a summary of Paul Penley’s points in a conversation on April 3, 2014, written by Katja with substantial help from the other participants.

What kind of philanthropic advising work does Excellence in Giving do?

Excellence in Giving is a philanthropic advisory firm with around seven staff members. Around 20 families retain them to act in the place of foundation staff, and other foundations consult with them on specific issues. Excellence in Giving provides experienced staff who can share with new family foundations what they have learned serving clients over the past 12 years. They don’t manage money, but they do track giving, structure grants, research grant effectiveness, produce grant impact reports and plan experiences for clients to see and celebrate their giving’s impact.

The research department at Excellence in Giving evaluates nonprofit organizations, performs community solutions assessments and sets up outcomes measurement processes for grantees to ensure its clients support strategic, well-managed charities making a difference. Evaluations at the organization level are discussed online here. Such organizational evaluations have taken place at after-school programs in Iowa, colleges in Oxford, children’s charities in Uganda and Kenya, and charitable trusts in north India.

Assessments of community needs and solutions tend to focus on a geographic location (e.g. Chicago), a population to be served (e.g. inner-city youths), and a focus area (e.g. early childhood development). They will identify problems among the population to be served that are related to the focus area, organizations that work on those problems, and solutions to those problems that have evidence of helping to alleviate the problems in question. This involves making tough judgment calls with data in hand from quantitative and qualitative research methods. Often an intervention is valuable, but enough work is already being done on it. This judgment call has to be made, but it can be hard for people working on an issue to hear that it’s not strategic to put more effort into a problem while the problem persists.

It is important in this process to distinguish the senses in which a problem seems bad. Sometimes a problem seems bad because a situation is suboptimal, though it may be rapidly getting better. This should be distinguished from something which seems bad because it is deteriorating, or because it is stably resistant to improvement.

This community level evaluation process might yield several opportunities to address real needs with solutions that have strong evidence behind them, and then Excellence in Giving works with the client to select among those opportunities. The decision between the best contenders tends to be made based on personal values and beliefs.

Sometimes these evaluations are done at the geographic location level, open to any population and focus area (e.g. any problem at all in Chicago that is relevant to any population). Excellence in Giving is currently working on a Community Solutions Assessment in Glynn County, Georgia for a local client. They have also done these evaluations with a specific focus area in mind, but in any geographic location (e.g. human trafficking anywhere in the world). They do not have experience doing this kind of open-ended work in medical research. They have not done fully open-ended research on any geographic location, any population, and any focus area. Paul suspects this would be prohibitively difficult to do well and would involve highly questionable judgment calls. They have not had demand for this type of research.

Paul is familiar with effective altruism and says they do not frequently encounter people with the kind of utilitarian mindset that might make extremely open-ended, cause-neutral research attractive. Paul is sympathetic to the motives, but skeptical of the feasibility of that research program, given the number of highly controversial judgment calls that would be involved. It is easy to rank children dying in Africa ahead of wild donkey preservation efforts (contra Rockefeller Philanthropy Advisors’ purely issue-agnostic stance), but creating some calculation of value that can rank all issues in all geographies among all vulnerable populations is ultimately subjective and uncertain. He mentions that the Hewlett Foundation until recently had a program to encourage effective philanthropy, but abandoned it because they were unsatisfied with the results.

The evaluations that Excellence in Giving does are the property of the clients who pay for the evaluations. Excellence in Giving is happy for clients to make this research publicly available if they wish. Sometimes clients have been quite enthusiastic about publicizing the reports, for instance making websites to showcase them. However, most clients want the organizations to improve after reviewing the critiques and recommendations rather than be pigeonholed with all their problems. Most charities Excellence in Giving evaluates issue a written response with action steps for improvement so the evaluation’s sponsor knows how he or she helped make them more effective.

If Excellence in Giving were serving a philanthropist who wanted to put more resources into finding good opportunities within an area, there is much more work that could be done. That is, at the usual scale of such investigations, they are not yet reaching sharply diminishing returns to research. There is always more primary research about a community that can be done to judge what is most needed and what is actually driving transformation. That is one reason Excellence in Giving sets up Outcomes Measurement processes for grantees. They want clients to know if they are really making a sustainable difference in the lives of beneficiaries or just an annual donation to a charity’s budget.

Clients

Before investigating potential interventions, Excellence in Giving endeavors to thoroughly understand a philanthropist’s personal journey and formation of values. As part of this process, the philanthropist completes a detailed survey, covering their lifetime experiences, professional background, interests, beliefs and values. Paul believes it is important to meet people where they are through this process, even if the long-term goal is to educate them toward more effective and strategic giving priorities.

Nobody ever hears their pitch and says ‘why would you want to do that?’. Almost everyone agrees with Excellence in Giving’s goals in principle. But saying you want to support effective organizations solving the world’s greatest problems (where possible) is different from investing the time and money to do so. As the Money for Good 2010 study found, 85% of funders might want to fund effective organizations, but only 3% compare the effectiveness of charities when determining whom to support.

Resources they use

There is a huge amount of information available already, in the form of academic articles, across many areas. However, it is quite hard to find and use. There may be ways to organize it that would make this work easier. Associations of funders, like the Philanthropy Roundtable in the USA, do issue papers on effective interventions to support, but no comprehensive, easily accessible, and searchable repository of effective interventions to fund across different issues, geographies and populations exists.

Similarities and differences to other organizations

Excellence in Giving’s research is substantially more in-depth than the research most family foundations conduct for their own purposes. They have also developed sophisticated tools for assessing organizational health, a focus that they believe sets them apart from, for instance, GiveWell. Paul expects they would be more concerned than GiveWell if an organization with a historically good program had a change in leadership, for instance. Excellence in Giving is also less inclined to publish a general ‘top three’ list, as they are uncomfortable making the arbitrary judgment calls required to do this across all issues, geographies and populations.

There are probably a dozen philanthropic advisory firms like Excellence in Giving, and a large number of individuals who do similar consulting work. (There is a good website at which to see such groups here.) Some of these individuals and firms create conflicts of interest by doing philanthropic advisory work interspersed with working for the charities which seek funding from philanthropists.

How could someone get into this kind of work?

Excellence in Giving is always looking for interns and is currently hiring for an entry-level research position. Someone qualified to do this work would be capable of analyzing academic research relevant to philanthropic advising and presenting it in a way that would be relevant and compelling to someone from the business world. It’s important that someone doing this work is willing to discriminate between philanthropic opportunities, rather than being enthusiastic about all options. They generally prefer candidates with some experience in academic research at the master’s level and some work experience, though this would not be a strict requirement and decisions would be made on a case-by-case basis. To a certain extent, some people are just ‘wired differently’ in a way that makes them good at this work. They naturally analyze, read between the lines, can focus on problems to solve for days on end, and uncommonly have common sense about what works in the real world.

What future opportunities would be available to someone who worked in this space?

Someone who took a research job at Excellence in Giving could advance within the company, since the firm continues to grow its top-line revenue by double digits every year. They would also be a natural candidate for work as a program officer at a foundation. Experience in philanthropic advising and evaluation gives people who want to run a non-profit both knowledge and wisdom about best practices in different program areas and contexts.

Shallow investigation into cause prioritization

Cross-posted at the 80,000 Hours Blog

I recently conducted a ‘shallow investigation*’ into cause prioritization, with help from Nick Beckstead. You can read it and various related interview notes here. It covers the importance of cause prioritization; who is doing it, funding it, or using it; and opportunities to contribute. This blog post is a summary of my impressions, given the findings of the investigation.  

*see GiveWell.



Summary

Cause prioritization research seems likely enough to be high value to warrant further investigation. It appears that on the order of billions of dollars per year might be influenced by it in just the near future, that current efforts cost a few million dollars per year and are often influential, and that there are many plausible ways to contribute. It also seems likely that things will get better in the future, as more work is done.

Funding which might be substantially influenced by cause prioritization research is probably worth at least several billion dollars per year.

Prioritization research is likely more relevant to new funders than established ones, but we can get an idea of the scale of new cost-effectiveness-sensitive philanthropists by looking at existing ones. This study found nine private funders (or groups of them) who appear to care about cost-effectiveness, without looking very hard. The Gates Foundation spends around $3.4bn annually, the Hewlett Foundation spent $304M in 2012, and Good Ventures around $10m in 2014. The others spent less, or I did not find data on them. Given these figures, it seems reasonable to expect more cost-effectiveness-sensitive foundations in the future, giving hundreds of millions of dollars per year.

I think funders not explicitly focused on cost-effectiveness are probably also influenced by prevailing beliefs about cause effectiveness, which are likely influenced (gradually) by research. $300bn is spent annually on philanthropy in the US, and probably somewhat less than twice that much is spent globally. Development assistance is at least $125bn per year. Government domestic spending on some kinds of programs is also a worthy target for prioritization research, not measured here.
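As a rough back-of-the-envelope check on the ‘at least several billion dollars per year’ claim, here is a minimal sketch tallying the figures quoted above (all approximate, and covering only the funders this investigation happened to find data on):

    # A rough tally of the cost-effectiveness-sensitive funding figures quoted above.
    # All numbers are approximate and drawn directly from the text.
    funders = {
        "Gates Foundation (annual)": 3.4e9,
        "Hewlett Foundation (2012)": 304e6,
        "Good Ventures (2014)": 10e6,
    }

    total = sum(funders.values())
    print(f"Identified cost-effectiveness-sensitive funding: ~${total / 1e9:.1f}bn per year")
    # ~$3.7bn per year from these three alone, before counting less explicitly
    # cost-effectiveness-focused philanthropy and future growth.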

Few organizations work on cause prioritization.

Those identified in this study as working directly on public cause prioritization research have a total annual budget of around $3M. Philanthropists also sometimes invest in private cause prioritization, and other kinds of organizations do related work.

Current efforts are plausibly successful, though I haven’t investigated them much.

There are some examples of cause prioritization redirecting large volumes of funding; however, I do not know of enough such examples, or know enough about how the money moved, to be confident that the cost has been worth it. We are told of cases of prioritization influencing $4bn of government spending, and substantially moving $208M of government spending and $750M of private funding. These are largely from the Copenhagen Consensus Center (CCC), which has probably spent very roughly $15M ever. If we suppose their only output was moving $820M (i.e. if we ignore their apparent influence on billions of dollars, and all other less clear cases of influence), then in order to break even, they would need to improve the quality of that spending by about 1.8%. This seems very plausible, though I have not investigated the strength of their evidence. To be a highly effective use of money, they would have to do better. On the other hand, they appear to have many other good effects, and everyone seems to agree that most of the value from cause prioritization should come in the future anyway, since we are only just learning how to do it.
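For concreteness, here is the break-even arithmetic as a minimal sketch (the figures are the rough estimates above, not precise accounting):

    # Rough break-even arithmetic for the Copenhagen Consensus Center (CCC) example.
    # Figures are the rough estimates quoted above.
    ccc_spending = 15e6      # very roughly $15M spent by CCC ever
    funding_moved = 820e6    # conservative figure for funding moved, per the text

    # Minimum fractional improvement in the quality of the moved spending
    # required for CCC's costs to have been worth it.
    break_even_improvement = ccc_spending / funding_moved
    print(f"Break-even improvement in spending quality: {break_even_improvement:.1%}")
    # prints roughly 1.8%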

Many opportunities appear to exist for contributing to cause prioritization.

Existing organizations seek funding, and cause prioritization seems amenable to small-scale research projects, such as individual researchers working independently. A large variety of research approaches and questions are plausibly valuable, and experimentation is probably unusually good, given the early stage of the research. There are also a variety of non-research routes to contributing to cause prioritization, such as encouraging people to use it, arranging for other researchers to use comparable metrics, organizing relevant academic research, outreach such as workshops at foundation conferences, and encouraging sharing of private research. This project has not investigated the value of any of these specific ideas.

My own views

If I had some money to spend, cause prioritization would be one of the top ways I would consider spending it. For an example of the kind of thing I’d consider: I think a pilot project investigating the indirect and long-term effects of relevant actions would be valuable. For instance, when you give cash to a person in the developing world, does it help the country develop faster? Does it change the population? Does it make the world better or worse than you would expect if you just looked at that person’s wellbeing?

Many findings in this area seem applicable to evaluating a range of causes, and would probably remain applicable for a long time. There appears to be relevant academic research (e.g. into the effects of economic growth on violence, or on the extent to which sub-optimal standards persist), and many people suspect long run effects are important, yet when choosing interventions it is common to ignore them. There is disagreement over whether this kind of research is tractable, and I think it is worth checking more thoroughly.

This project would not involve prioritizing causes directly, however it would provide an important building block in the prioritization of many causes, and I expect would reveal some valuable interventions that we were not thinking about. I’m not sure if this is among the best suggestions in the longer document. But it is an example of the kind of project that appears to be tractable, cheap, and promising.

Announcement: Superintelligence reading group

I will be running an online reading group on Nick Bostrom’s new book, Superintelligence, on behalf of MIRI (my employer). Please join me! I append the post from MIRI’s blog, with more details.



Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.