AI surprise research: high value?

If artificial intelligence were about to become ‘human-level’, do you think we (society) would get advance notice? Would artificial intelligence researchers have been talking about it for years? Would tell-tale precursor technologies have triggered the alert? Would it be completely unsurprising, because AIs had been able to do almost everything that humans could do for a decade, and had been catching up at a steady pace?

Whether we would be surprised then seems to make a big difference to what we should do now. Suppose that there are things someone should do before human-level AI appears (a premise of most current efforts to mitigate AI impacts). If there will be a period in which many people anticipate human-level AI soon, then probably someone will do the highest priority things. If you try to do them now, you might merely displace work that would have been done anyway, or simply fail because it is hard to see what needs doing so far ahead of time. So if you think AI will not be surprising, then the best things to do regarding AI now will tend to be the high-value things which require a longer lead time. This might include building better institutions and capabilities; shifting AI research trajectories; doing technical work that is hard to parallelize; or looking for ways to get the clear warning earlier.

Anders Sandberg has put some thought into warning signs for AI (http://www.aleph.se/andart/archives/2006/10/warning_signs_for_tomorrow.html).

On the other hand, if the advent of human-level AI will be very surprising, then only a small group of people will ever respond to the anticipation of human-level AI (including those who are already concerned about it). This makes it more likely that a person who anticipates human-level AI now – as a member of that small group – should work on the highest priority things that will ever need to be done about it. This might include object-level tools for controlling moderately powerful intelligences, or designing features that would lower the risks posed by those intelligences.

I have just argued that the best approach for dealing with concerns about human-level AI should depend on how surprising we expect it to be. I also think there are relatively cheap ways to shed light on this question that (as far as I know) haven’t received much attention. For instance, one could investigate:

  1. How well can practitioners in related areas usually predict upcoming developments? (especially for large developments, and for closely related fields)
  2. To what extent is progress in AI driven by conceptual progress, and to what extent is it driven by improvements in hardware?
  3. Do these happen in parallel for a given application, or e.g. does some level of hardware development prompt software development?
  4. Looking at other areas of technological development, what warnings have been visible before large, otherwise surprising changes? What properties go along with surprising changes?
  5. What kinds of evidence of upcoming change motivate people to action, historically?
  6. What is the historical prevalence of discontinuous progress in analogous areas, such as technology in general, software, and algorithms? (I’ve investigated this a bit; very preliminary results suggest discontinuous progress is rare.)

Whether brain emulations, brain-inspired AI, or more artificial AI comes first is also relevant to this question, as are our expectations about the time until human-level AI appears. So investigations which shed light on those issues should also shed light on this one.

Several of the questions above might be substantially clarified with less than a person-year of effort. With that degree of sophistication, I think they have a good chance of changing our best guess about the degree of warning to expect, and perhaps about what people concerned about AI risk should do now. Such a shift seems valuable.

I have claimed that the surprisingness of human-level AI makes quite a difference to what those concerned about AI risk should do now, and that learning more about this surprisingness is cheap and neglected. So it seems pretty plausible to me that investigating the likely surprisingness of human-level AI is a better deal at this point than acting on our current understanding.

I haven’t made a watertight case for the superiority of research into AI surprise, and I don’t necessarily believe it. I have, however, gestured at a case. Do you think it is wrong? Do you know of work on these topics that I should know about?

happy AI day

Conversation with Paul Penley of Excellence in Giving

Cross-posted from the 80,000 Hours Blog. Part of the Cause Prioritization Shallow Investigation, all parts of which are available here. Previously in this series, conversations with Owen Cotton-Barratt and Paul Christiano.

 

Participants

  • Paul Penley: Director of Research, Excellence in Giving
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute
  • Nick Beckstead: Research Fellow, Future of Humanity Institute; Board of Trustees, Center for Effective Altruism

Notes

This is a summary of Paul Penley’s points in a conversation on April 3, 2014, written by Katja with substantial help from the other participants.

What kind of philanthropic advising work does Excellence in Giving do?

Excellence in Giving is a philanthropic advisory firm with around seven staff members. Around 20 families retain them to act in the place of foundation staff, and other foundations consult with them on specific issues. Excellence in Giving provides an experienced staff who can share with new family foundations what they have learned serving clients over the past 12 years. They don’t manage money, but they do track giving, structure grants, research grant effectiveness, produce grant impact reports and plan experiences for clients to see and celebrate their giving’s impact.

The research department at Excellence in Giving evaluates nonprofit organizations, performs community solutions assessments and sets up outcomes measurement processes for grantees, to ensure its clients support strategic, well-managed charities making a difference. Evaluations at the organization level are discussed online here. Such organizational evaluations have taken place at after-school programs in Iowa, colleges in Oxford, children’s charities in Uganda and Kenya, and charitable trusts in north India.

Assessments of community needs and solutions tend to focus on a geographic location (e.g. Chicago), a population to be served (e.g. inner-city youths), and a focus area (e.g. early childhood development). They will identify problems among the population to be served that are related to the focus area, organizations that work on those problems, and solutions to those problems that have evidence of helping to alleviate the problems in question. This involves making tough judgement calls with data in hand from quantitative and qualitative research methods. Often an intervention is valuable, but enough work is already being done on it. This judgement call has to be made, but it can be hard for people working on an issue to hear that it’s not strategic to put more effort into a problem while the problem persists.

It is important in this process to distinguish the senses in which a problem seems bad. Sometimes a problem seems bad because a situation is suboptimal, though it may be rapidly getting better. This should be distinguished from something which seems bad because it is deteriorating, or because it is stably resistant to improvement.

This community level evaluation process might yield several opportunities to address real needs with solutions that have strong evidence behind them, and then Excellence in Giving works with the client to select among those opportunities. The decision between the best contenders tends to be made based on personal values and beliefs.

Sometimes these evaluations are done at the geographic location level, open to any population and focus area (e.g. any problem at all in Chicago that is relevant to any population). Excellence in Giving is currently working on a Community Solutions Assessment in Glynn County Georgia for a local client. They have also done these evaluations with a specific focus area in mind, but in any geographic location (e.g. human trafficking anywhere in the world). They do not have experience doing this kind of open-ended work in medical research. They have not done fully open-ended research on any geographic location, any population, and any focus area. Paul suspects this would be prohibitively difficult to do well and would involve highly questionable judgment calls. They have not had demand for this type of research.

Paul is familiar with effective altruism and says they do not frequently encounter people with the kind of utilitarian mindset that might make extremely open-ended, cause-neutral research attractive. Paul is sympathetic to the motives, but skeptical of the feasibility of that research program, given the number of highly controversial judgment calls that would be involved. It is easy to rank children dying in Africa ahead of wild donkey preservation efforts (contra Rockefeller Philanthropy Advisors’ purely issue-agnostic stance), but creating some calculation of value that can rank all issues in all geographies among all vulnerable populations is ultimately subjective and uncertain. He mentions that the Hewlett Foundation until recently had a program to encourage effective philanthropy, but abandoned it because they were unsatisfied with the results.

The evaluations that Excellence in Giving does are the property of the clients who pay for them. Excellence in Giving is happy for clients to make this research publicly available if they wish. Sometimes clients have been quite enthusiastic about publicizing the reports, for instance making websites to showcase them. However, most clients want the organizations to improve after reviewing the critiques and recommendations, rather than be pigeonholed by all their problems. Most charities Excellence in Giving evaluates issue a written response with action steps for improvement, so the evaluation’s sponsor knows how he or she helped make them more effective.

If Excellence in Giving were serving a philanthropist who wanted to put more resources into finding good opportunities within an area, there is much more work that could be done. That is, at the usual scale of such investigations, they are not yet hitting sharply diminishing returns to research. There is always more primary research about a community that can be done to judge what is most needed and what is actually driving transformation. That is one reason Excellence in Giving sets up Outcomes Measurement processes for grantees. They want clients to know whether they are really making a sustainable difference in the lives of beneficiaries, or just an annual donation to a charity’s budget.

Clients

Before investigating potential interventions, Excellence in Giving endeavors to thoroughly understand a philanthropist’s personal journey and formation of values. As part of this process, the philanthropist completes a detailed survey, covering their lifetime experiences, professional background, interests, beliefs and values. Paul believes it is important to meet people where they are through this process, even if the long-term goal is to educate them toward more effective and strategic giving priorities.

Nobody ever hears their pitch and says ‘why would you want to do that?’. Almost everyone agrees with Excellence in Giving’s goals in principle. But saying you want to support effective organizations solving the world’s greatest problems (where possible) is different from investing the time and money to do so. As the Money for Good 2010 study found, 85% of funders might want to fund effective organizations, but only 3% compare the effectiveness of charities when deciding whom to support.

Resources they use

There is a huge amount of information already available, in the form of academic articles, across many areas. However, it is quite hard to find and use, and there may be ways to organize it that would make this work easier. Associations of funders, like the Philanthropy Roundtable in the USA, do issue papers on effective interventions to support, but no comprehensive, easily accessible and searchable repository of effective interventions to fund, across different issues, geographies and populations, exists.

Similarities and differences to other organizations

Excellence in Giving’s research is substantially more in-depth than what most family foundations conduct for their own purposes. They have also developed sophisticated tools for assessing organizational health, a focus that they believe sets them apart from, for instance, GiveWell. Paul expects they would be more concerned than GiveWell if an organization with a historically good program had a change in leadership, for instance. Excellence in Giving is also less inclined to publish a general ‘top three’ list, as they are uncomfortable making the arbitrary judgment calls required to do this across all issues, geographies and populations.

There are probably a dozen philanthropic advisory firms like Excellence in Giving, and a large number of individuals who do similar consulting work. (There is a good website for finding such groups here.) Some of these individuals and firms create conflicts of interest by doing philanthropic advisory work interspersed with working for the charities which seek funding from philanthropists.

How could someone get into this kind of work?

Excellence in Giving is always looking for interns, and is currently hiring for an entry-level research position. Someone qualified to do this work would be capable of analyzing academic research relevant to philanthropic advising and presenting it in a way that would be relevant and compelling to someone from the business world. It’s important that someone doing this work is willing to discriminate between philanthropic opportunities, rather than being enthusiastic about all options. They generally prefer candidates with some experience in academic research at the master’s level and some work experience, though this is not a strict requirement and decisions are made on a case-by-case basis. To a certain extent, some people are just ‘wired differently’ in a way that makes them good at this work: they naturally analyze, read between the lines, can focus on problems to solve for days on end, and have uncommon common sense about what works in the real world.

What future opportunities would be available to someone who worked in this space?

Someone who took a research job at Excellence in Giving could advance within the company, since the firm continues to grow its top-line revenue by double digits every year. They would also be a natural candidate for work as a program officer at a foundation. Experience in philanthropic advising and evaluation gives people who want to run a non-profit both knowledge and wisdom about best practices in different program areas and contexts.

Shallow investigation into cause prioritization

Cross-posted at 80,000 Hours Blog

I recently conducted a ‘shallow investigation*’ into cause prioritization, with help from Nick Beckstead. You can read it and various related interview notes here. It covers the importance of cause prioritization; who is doing it, funding it, or using it; and opportunities to contribute. This blog post is a summary of my impressions, given the findings of the investigation.  

*see GiveWell.



Summary

Cause prioritization research seems likely enough to be high value to warrant further investigation. It appears that on the order of billions of dollars per year might be influenced by it in just the near future, that current efforts cost a few million dollars per year and are often influential, and that there are many plausible ways to contribute. It also seems likely that things will get better in the future, as more work is done.

Funding which might be substantially influenced by cause prioritization research is probably worth at least several billion dollars per year.

Prioritization research is likely more relevant to new funders than established ones, but we can get an idea of the scale of new cost-effectiveness-sensitive philanthropy by looking at existing funders. This study found nine private funders (or groups of them) who appear to care about cost-effectiveness, and did not look that hard. The Gates Foundation spends around $3.4bn annually, the Hewlett Foundation spent $304M in 2012, and Good Ventures around $10m in 2014. The others spent less, or I did not find data on them. Given these figures, it seems reasonable to expect more foundations in the future that care about cost-effectiveness and spend hundreds of millions of dollars per year.
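As a rough sense check of these figures, here is a minimal sketch (in Python) that just totals the three annual spending numbers quoted above; the other funders found in the study are omitted because they spent less or I lacked data on them:

```python
# Minimal sketch: total the approximate annual spending of the
# cost-effectiveness-sensitive funders quoted above.
funders_annual_spending = {
    "Gates Foundation": 3.4e9,    # ~$3.4bn per year
    "Hewlett Foundation": 304e6,  # $304M in 2012
    "Good Ventures": 10e6,        # ~$10m in 2014
}

total = sum(funders_annual_spending.values())
print(f"Known cost-effectiveness-sensitive spending: ~${total / 1e9:.1f}bn per year")
# Prints roughly $3.7bn per year, consistent with "at least several billion
# dollars per year" once the remaining funders and future entrants are included.
```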

I think funders not explicitly focused on cost-effectiveness are probably also influenced by prevailing beliefs about cause effectiveness, which are likely influenced (gradually) by research. $300bn is spent annually on philanthropy in the US, and probably somewhat less than twice that much is spent globally. Development assistance is at least $125bn per year. Government domestic spending on some kinds of programs is also a worthy target for prioritization research, not measured here.

Few organizations work on cause prioritization.

Those identified in this study to be working directly on public cause prioritization research have a total annual budget of around $3M. Philanthropists also sometimes invest in private cause prioritization, and other kinds of organizations do related work.

Current efforts are plausibly successful, though I haven’t investigated them much.

There are some examples of cause prioritization redirecting large volumes of funding; however, I do not know of enough such examples, or know enough about how the money moved, to be confident that the cost has been worth it. We are told of cases of prioritization influencing $4bn of government spending, and substantially moving $208M of government spending and $750M of private funding. These are largely from the Copenhagen Consensus Center (CCC), which has probably spent very roughly $15M over its lifetime. If we suppose their only output was moving $820M (i.e. if we ignore their apparent influence on billions of dollars, and all other less clear cases of influence), then in order to break even, they would need to improve the quality of that spending by 1.8%. This seems very plausible, though I have not investigated the strength of their evidence. To be a highly effective use of money they would have to do better. On the other hand, they appear to have many other good effects, and everyone seems to agree that most of the value from cause prioritization should come in the future anyway, since we are only just learning how to do it.
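To make the break-even arithmetic explicit, here is a minimal sketch (in Python) using only the rough figures quoted above:

```python
# Break-even sketch for the Copenhagen Consensus Center example above.
# Both inputs are the very rough figures quoted in the text.
total_spent = 15e6   # ~$15M spent by CCC over its lifetime
money_moved = 820e6  # ~$820M of moved spending, treated as its only output

# Fractional improvement in the quality of the moved spending needed
# for the research to pay for itself.
break_even_improvement = total_spent / money_moved
print(f"Break-even quality improvement: {break_even_improvement:.1%}")  # ~1.8%
```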

Many opportunities appear to exist for contributing to cause prioritization.

Existing organizations seek funding, and cause prioritization seems amenable to small-scale research projects, such as individual researchers working independently. A large variety of research approaches and questions are plausibly valuable, and experimentation is probably unusually good, given the early stage of the research. There are also a variety of non-research routes to contributing to cause prioritization, such as encouraging people to use it, arranging for other researchers to use comparable metrics, organizing relevant academic research, outreach such as workshops at foundation conferences, and encouraging the sharing of private research. This project has not investigated the value of any of these specific ideas.

My own views

If I had some money to spend, cause prioritization is one of the top ways I would consider spending it. For an example of the kind of thing I’d consider: I think a pilot project investigating the indirect and long-term effects of relevant actions would be valuable. For instance, when you give cash to a person in the developing world, does it help the country develop faster? Does it change the population? Does it make the world better or worse than you would expect if you just looked at that person’s wellbeing?

Many findings in this area seem applicable to evaluating a range of causes, and would probably remain applicable for a long time. There appears to be relevant academic research (e.g. into the effects of economic growth on violence, or on the extent to which sub-optimal standards persist), and many people suspect long run effects are important, yet when choosing interventions it is common to ignore them. There is disagreement over whether this kind of research is tractable, and I think it is worth checking more thoroughly.

This project would not involve prioritizing causes directly, however it would provide an important building block in the prioritization of many causes, and I expect would reveal some valuable interventions that we were not thinking about. I’m not sure if this is among the best suggestions in the longer document. But it is an example of the kind of project that appears to be tractable, cheap, and promising.

Announcement: Superintelligence reading group

I will be running an online reading group on Nick Bostrom’s new book, Superintelligence, on behalf of MIRI (my employer). Please join me! I append the post from MIRI’s blog, with more details.



Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.

Conversation with Paul Christiano on Cause Prioritization Research

I talked to Paul Christiano about his views on cause prioritization as part of a ‘shallow investigation’ (inspired by GiveWell) into cause prioritization, which will be released soon. Notes from the conversation are cross-posted here from the 80,000 Hours blog, and are also available in other formats on my website. Previously in this series, a conversation with Owen Cotton-Barratt on GPP.


 

Participants

  • Paul Christiano: Computer science PhD student at UC Berkeley
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute

Summary

This is a verbatim email conversation from the 26th of March 2014. Paul is a proponent of cause prioritization research. Here he explains his support of prioritization research, and makes some suggestions about how to do it.

Note: Paul is Katja’s boyfriend, so take his inclusion here as a relevant expert with a grain of salt.

Katja: How promising do you think cause prioritization is generally? Why?

Paul: Defined very broadly—all research that helps us choose what general areas we should be looking into for the best philanthropic impact—I think it is a very strong contender for best thing to be doing at the moment. This judgment is based on optimism about how much money could potentially be directed by the kind of case for impact which we could conceivably construct, but also on the belief that there is a good chance that over the very long term the philanthropic community will become radically better-informed and more impactful (think many times) than it currently is. If that’s the case, then it seems likely that a primary output of modern philanthropy is moving towards that point. This is not so much a story about quickly finding insights that let you find a particular opportunity that is twice as effective, and more a story of accumulating a body of expertise and information that has a very large payoff over the longer term. I think that (not coincidentally) one can also give shorter-term justifications for prioritization vs. direct spending, which I also find quite compelling but perhaps not quite as much so.

Katja: Why do you think not enough is done already?

Paul: You could mean what evidence do I have that not enough is done, or what explanation can I offer for why not enough has been done even if it really is a good thing. I’m going to answer the second.

I think a very small fraction of philanthropists are motivated by a flexible or broad desire to do the most good in the world. So there aren’t too many people who we would expect to do this kind of thing. As a general rule there seems to be relatively little investment in expensive infrastructure which is primarily useful to other people, and relatively little investment in speculative projects that will take a long time and don’t have a great chance of paying off. I do think we are seeing more of this kind of thing in general recently, due to the same kinds of broader cultural shifts that have allowed the EA movement to get traction recently.

Katja: How much better do you think the very best interventions are likely to be than our current best guesses?

Paul: This kind of question is hard to answer due to ambiguity about “very best.” I’m sure there in some sense are very simple things you could do that are many orders of magnitude more cost-effective than interventions we currently support. So it seems like this really needs to necessarily be a question about investigative effort vs. effectiveness. In the very long-term, I would certainly not be surprised to discover that the most effective philanthropy in the future was ten or a hundred times more effective than contemporary philanthropy.

Katja: I believe you value methodological progress in this area highly. Is that true? What kind of methodological progress would be valuable?

Paul: There are a lot of ways you could go about figuring stuff out, and I expect most problems to be pretty hard without a long history of solving similar problems. Across fields, it seems like people get better at answering questions as they see what works and what doesn’t work to answer similar questions, they identify and improve the most effective approaches, and so on. This is stuff like, what questions do you ask to evaluate the attractiveness of a cause or intervention? Who do you talk to how much, and what kind of person do you hire to do how much thinking? How do you aggregate differing opinions, and what kind of provisional stance do you adopt to move forward in light of uncertainty? How confident an answer should you expect to get, and how should you prioritize spending time on simple issues vs. important issues? You could write down quite a lot of parameters which you can fiddle with as part of any effort to figure out “how promising is X?” and there are way more parameters that are harder to write down but inevitably come up if you actually sit down and try to do it. So there is a lot to figure out about how to go about the figuring out, and I would imagine that the primary impact of early efforts will be understanding what settings of those parameters are productive and accumulating expertise about how to attack the problem.

Katja: Why is it better to evaluate causes than interventions or charities?

Paul: I could give a number of different answers to this question, that is, I feel like a number of considerations point in this direction.

One is that evaluating charities typically requires a fairly deep understanding of the area in which they are working and the mechanism by which that charity will have an impact. That’s not the sort of thing you can build up in a month while you evaluate a charity, it seems to be the sort of thing that is expensive to acquire and is developed over funding many particular interventions. So one obvious issue is that you have to make choices about where to acquire that expertise and do that further investigation prior to being really equipped to evaluate particular opportunities (though this isn’t to say that looking at particular opportunities shouldn’t be a part of understanding how promising a cause is).

Another is that there are just too many charities to do this analysis for many of them, and the landscape is changing over time (this is also true for interventions though to a lesser extent). If you want to contribute to a useful collective understanding, information about these broader issues is just more broadly and robustly applicable. If you are just aiming to find a good thing to give to now this is not so much an issue, but if you are aiming to become better and better at this over time, judgments about individual charities are not that useful in and of themselves. Of course, while making such judgments you may acquire useful info about the bigger picture or make useful methodological progress.

My views on this question are largely based on a priori reasoning (and on all of these questions), which makes me very hesitant to speak authoritatively about them. But it’s worth noting that GiveWell has also reached the conclusion that a cause is a good level of analysis at least at the outset of an investigation, and their conclusion is more closely tied to very practical considerations about what happens when you actually try and conduct these investigations.

Katja: Can you point to past cause prioritization research that was high value? How did it produce value?

Paul: Three examples, of very different character:

  1. GiveWell has done research evaluating charities and interventions that has clearly had an effect at improving individuals’ giving and improving the quality of discourse about related issues, and have made relevant methodological progress. Labs is now working on evaluating causes and I think their current understanding has already somewhat improved the quality of discourse and had some positive expected impact on Good Ventures’ spending. The kind of story I am expecting is more long-term progress, so while I think good work will produce value along the way, I am very open to the possibility that most of the value is coming from gradual progress towards a more ambitious goal rather than improved spending this year.
  2. Many EAs have been influenced by arguments regarding astronomical waste, existential risk, shaping the character of future generations, and impacts of AI in particular. To the extent that we actually have aggregative utilitarian values I think these are hugely important considerations and that calling them to attention and achieving increased clarity on them has had a positive impact on decisions (e.g. stuff has been funded that is good and would not have been otherwise) and is an important step towards understanding what is going on. I think most of the positive impact of these lines of argument will wait until they have been clarified further and worked out more robustly.
  3. There is a lot of one-off stuff in economics and the social sciences more broadly that bears on questions about which causes are promising, even if it wasn’t directly motivated by them—moreover, I think that if you were motivated by them and doing research in the social sciences or supporting research in the social sciences, you could zoom in on these most relevant questions. I’m thinking of economics that sheds light on the magnitude of the externalities from technological development, the impact of inequality, or determinants of good governance; or history that sheds light on the empirical relationship between war, technological development, economic development, population growth, moral attitudes etc.; or so on. One could potentially lump in RCTs that shed light on the relationship between narrower interventions and more intermediate outcomes of interest. All of this stuff has a more nebulous impact on causing the modern intellectual elite to have generally more sensible views about relevant questions.

Katja: If new philanthropists wanted to contribute to this area, do you have thoughts on what they should do?

(if they wanted to spend $10,000?)

(if they wanted to spend $1M?)

Paul: If it were possible to fund GiveWell Labs more narrowly that would be an attractive opportunity, and GiveWell seems like an alright bet anyway. Their main virtue as compared to others in the space is that they are on a more straightforward trajectory, where they have an OK model already and can improve it marginally.

It seems like CEA has access to a good number of smart young people who are unusually keen on effectiveness per se; it seems pretty plausible to me that they will eventually be able to turn some of that into valuable research. I think they aren’t there yet (and haven’t really been trying) so this is a lot more speculative (but marginal dollars may be more needed). If it were possible to free up Nick Beckstead’s time with more dollars I would seriously consider that.

Katja: If you had money to spend on cause prioritization broadly, would it be better spent on prioritizing causes, more narrow research which informs prioritization (e.g. about long run effects of technological progress or effectiveness of bed nets), outreach, or something else? (e.g. other forms of synthesis, funding, doing good projects)

Paul: The most straightforwardly good-seeming thing to do at the moment is to bite off small questions relating to the relative promise of particular causes, and then do a solid job aggregating the empirical evidence and expert opinion to produce something that is pretty robustly useful. But there is also a lot of room for trying other things. Overall it seems like the most promising objective is building up the collective stock of knowledge that is robustly useful for making judgments between causes.

Katja: It is sometimes claimed that funders care very little about prioritization research, and so efforts are better spent on outreach than on research, which will be ignored. What do you think of this model?

Paul: I think that the number of people who might care is much larger than the number who currently do, and a primary bottleneck is that the product is not good enough. Between that and the fact that I’m quite confident there are at least a few million cause-agnostic dollars a year that seem sensitive to good arguments, I would be pretty comfortable contributing to cause prioritization. Outreach might be a better bet, but it’s certainly less certain, and my current best guess is that it’s less effective for reaching the most important people than building a more compelling product.