Intelligence Amplification Interview

Ryan Carey and I discussed intelligence amplification as an altruistic endeavor with Gwern Branwen. Here (docx) (pdf) is a summary of Gwern’s views; it is also more permanently available on my website.

How to trade money and time

Time has a monetary value to you. That is, money and time can be traded for one another in lots of circumstances, and there are prices that you are willing to take and prices you are not. Hopefully, other things equal, the prices you are willing to take are higher than the ones you aren’t.

Sometimes people object to the claim that time has a value in terms of money, but I think this tends to be a misunderstanding, or a statement about the sacredness of time and the mundanity of money. I also suspect that this feeling, that time is sacred and money in some sense is not, prompts people who accept in principle that money and time can be compared to object to actually doing the comparison much. There are further reasons you might object too: perhaps having an explicit value on your time makes you feel stressed, or cold calculations make you feel impersonal, or an accurate appraisal of your worth per hour makes you feel arrogant or worthless.

Still I think it is good to try to be aware of the value of your time. If you have an item, and you trade it all day long, and you don’t put a consistent value on it, you will be making bad trades all over the place. Imagine if you accepted wages on a day to day basis while refusing to pay any attention to what they were. Firstly, you could do a lot better by paying attention and accepting only the higher ones. But secondly, you would quickly be a target for exploitation, and only be offered the lowest wages.

I don’t think people usually do this badly in their everyday trading off of time and money, because they do have some idea of the trades they are making, just not a clear one. But many other things go into the sense of how much you should pay to buy time in different circumstances, so I think the prices people take vary a lot when they should not. For instance, a person who would not accept a wage below $30/h will waste an hour in an airport because they don’t have internet, instead of buying wifi for $5, because they feel this is overpriced. Or they will search for ten minutes to find a place that sells drinks for $3 instead of $4, because $4 is a lot for a drink. Or they will stand in line for twenty minutes to get the surprisingly cheap and delicious lunch, and won’t think of it as being an expensive lunch now.
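To make these trades concrete, here is a minimal sketch in Python of the comparison at stake in the examples above. The hourly figure and the scenarios are just the assumptions from this paragraph; substitute your own numbers.

```python
# A minimal sketch of the trade made in the examples above.
# The hourly value and the scenario numbers are illustrative assumptions.

HOURLY_VALUE = 30.0  # dollars per hour: the lowest wage you would accept

def worth_buying_time(dollars: float, hours_saved: float) -> bool:
    """Is paying `dollars` to save `hours_saved` hours a good trade?"""
    return dollars < hours_saved * HOURLY_VALUE

def worth_spending_time(hours: float, dollars_saved: float) -> bool:
    """Is spending `hours` hours to save `dollars_saved` a good trade?"""
    return dollars_saved > hours * HOURLY_VALUE

# Airport wifi: $5 to avoid wasting an hour.
print(worth_buying_time(5, 1.0))          # True: $5 buys $30 worth of time

# Searching ten minutes to save $1 on a drink: selling time at $6/h.
print(worth_spending_time(10 / 60, 1.0))  # False: $1 < $5 worth of time
```

On this accounting, the same person is implicitly valuing their time above $30/h in one case and selling it at $6/h in another, which is the inconsistency complained of here.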

I agree that time is very valuable. I just disagree that you should avoid putting values on valuable things. What you don’t explicitly value, you squander.

It can be hard to think of ways that you are trading off money and time in practice. In response to a request for these, below is a list. The items are intended to indicate trade-offs which might be helpful if you want to spend more money at the expense of time, or vice versa, in a given circumstance. Some are written as if to suggest that you should move in one direction in particular; remember that you can generally move in the opposite direction also.

Meta-error: I like therefore I am

I like Scott’s post on what LessWrong has learned in its lifetime. In general I approve of looking back at your past misunderstandings and errors, and trying to figure out what you did wrong. This is often very hard, because it’s hard to remember or imagine what absurd thoughts (or absences of thought) could have produced your past misunderstandings. I think this is especially because nonsensical confusions and oversights tend to be less well-formed, and thus less organizable or memorable than e.g. coherent statements are.

In the spirit of understanding past errors, here is a list of errors which I think spring from a common meta-error. Some are mentioned in Scott’s post, some were mine, some are others’ (especially those who are a combination of smart and naive I think), a few are hypothetical:

  • Because I believe agent-like behavior is obviously better than randomish reactions, I assume I am an agent (debunked!).
  • Because I think it is good to be sad about the third world, and not good to be sad about not having enough vitamin B, I assume the former is why I am sad.
  • Because I explicitly feel that racism is bad, I am presumably not racist.
  • Because my mind contains a line of reasoning that suggests I should not update much against my own capabilities because I am female, presumably I do no such thing.
  • Because I have formulated this argument that it is optimal for me to think about X-risks, I assume I am motivated to do so (also debunked on LW).
  • Because I follow and endorse arguments against moral realism, and infer that on reflection I prefer to be a consequentialist, I assume I don’t have any strong moral feelings about incest.
  • Because I have received sufficient evidence that I should believe Y, I presumably believe Y now.
  • I don’t believe Y, and the only reason I endorse to not believe things is that you haven’t got enough evidence for them, therefore I must not have enough evidence to believe Y.
  • Because I don’t understand the social role of Christmas, I presume I don’t enjoy it (note that this is a terrible failing of the outside view: none of those people merrily opening their presents understands the social role either).
  • Because I don’t endorse the social role of drinking, I assume I don’t enjoy it.
  • Because signaling sounds bad to me, I assume I don’t do it, or at least not as much as others.
  • Because I know the cost of standing up is small (it must be, it’s so brief and painless!), this cannot be a substantial obstacle to going for a run (debunked!).
  • I know good motives are better than bad motives, so presumably I’m motivated by good motives (unlike the bad people, who are presumably confused over whether good things are the things you should choose).
  • I have determined that polyamory is a good idea and babies are a bad idea, therefore I don’t expect to feel jealousy or any inclination to procreate in my relationships.

In general, I think the meta problem is failing to distinguish between endorsing a mental characteristic and having that characteristic. Not erroneously believing that the two are closely related, but actually just failing to notice there are two things that might not be the same.

It seems harder to make the same kind of errors with non-mental characteristics. Somehow it’s more obvious to people that saying you shouldn’t smoke is not the same as not smoking.

With mental characteristics, however, you mostly don’t know how your brain works, and it’s not obvious exactly what your beliefs and feelings are. And your brain does produce explicit endorsements, so perhaps it is easy to identify those with the mental characteristics the endorsements are closely related to. Note that explicitly recognizing this meta-error is different from having it integrated into your understanding.

Interview with Cool Earth

An interview with the people of Cool Earth, a charity I investigated and ultimately recommended as a relatively good one while visiting GWWC last summer.

New website contents

Rob and Federico of GWWC have made a nice summary of metacharities, which might be useful to those of you interested in cause prioritization as a cause. For future reference, I’ve put it in my online collection of useful things (just for useful things which need help with their web presence).

I’ve also put up a few ‘structured cases’ for various claims there. These are intended to be well organized collections of arguments and considerations regarding a particular question, such as ‘should I invest in opening US borders?’. The claims of interest are mostly about how resources can be used to make the world better. The arguments are in progress, and probably not in particularly good formats, nor complete. However I talk to people about these often, so it is good to have them online to point to.

I’ve also put up some puzzles and other things.

Discussions for information

Information and ideas percolate through society in many forms: via research papers, media publications, schools, advocacy campaigns, and perhaps most ubiquitously, private conversations.

Private conversations fulfill purposes other than information processing and transfer, so they cannot be expected to fulfill such roles perfectly, or even particularly well. They convey implicit information about the speakers and their qualities, they manifest social maneuvering, they embody humor and other good feelings, and they make a good setting for enjoying company more generally.

The information-related roles that conversation can play most obviously include straightforwardly communicating information, and – in a more argumentative fashion – collaboratively figuring out what is true. Even if conversations are usually for other things as well, given that they are a major part of society’s information processing and dispersal system, one might wonder whether they could do these jobs better when those roles are important. For instance, if you partake in ‘work’ conversations with the intention of productively progressing toward specific goals, should you be doing anything other than following your natural conversational instincts?

If you wanted to fulfill these information roles with conversations, here are some ways they seem to fall short in general:

  1. They are rarely recorded or shared usefully. This is bad because it means they must be repeated anew by many different groups of people, not to mention by exactly the same people.
  2. Relatedly, conversations seem to hardly build on one another over time. If I think of a good counterargument to your point, it won’t be available to any of the gazillion others having the same argument in the near future, because neither of us will do anything that makes it so, and both of us will probably forget all but the gist of the discussion by tomorrow. I posit that it is very hard for a counterargument to a counterargument to a counterargument to an argument to become widely known, even when the argument and the first counterargument are often repeated. Except when participants have a history, or perhaps a shared subculture, each discussion basically starts the topic anew. A given discussion rarely gets through many considerations, and doesn’t leave the considerations it does get through in a state to be built upon by other conversations.
  3. Discussions are often structured poorly for analysis, though perhaps acceptably for information transfer. It is natural for a discussion to take a fairly linear form, because only one sentence can be said at a time. But topics being discussed often don’t fit well in this form (see the sketch after this list). A given statement has many potential counterarguments, and many possible pieces of supporting evidence or vantage points from which to analyze it. The same is true of each of those arguments or supports. So a person may make a statement, then another person may offer a criticism, and so on for a few levels, at which point one of the people will ‘lose the argument’ because he has no retort. But that corner of the tree was not necessarily critically important to the truth of the initial claim. If at the first level a different argument had been pursued, or different evidence had been offered, a largely unrelated path would have been taken, and someone else might have had the final say. If the parties do not remember the rest of the structure of their discussion, it is hard to go back to a sensible juncture and hash out a different supporting claim. Usually they will just move on to something else that the last bit reminded them of, and in the future vaguely remember who won the argument, without further value being created.
  4. Relatedly, it is often unclear to the participants in a discussion what the overall structure of a discussion is, or how the parts relate to one another on even a small scale. For instance, which parts are important to carrying the point, and which parts are watertight. I find that when I write out an argument at length, in a structured way, I notice gaps that weren’t salient informally. And structured arguments look as if they help students reason more clearly.
  5. Disagreement interacts badly with the social signaling purposes of conversation, as it tends to be taken as an aggressive move unless done skillfully. It’s not clear to me whether this is a fundamental problem with collaboratively critiquing ideas or an accident of the social norms we have, both for critique and for attacking people.
  6. Similarly, allocating time in conversations tends to interact badly with social signaling. The person with the best point to listen to next is unlikely to always be the one who should talk next according to fairness, kindness, status, or volume. There have been some attempts to improve this.

This list is nowhere near exhaustive. Feel free to suggest more.
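As an illustration of the tree structure described in item 3, here is a minimal sketch in Python. It is a hypothetical construction, not any existing argument-mapping tool: a discussion recorded as a tree of claims and responses, so that unexplored branches are preserved instead of being lost when one path reaches a dead end.

```python
# A sketch of item 3: a discussion as a tree of claims and responses,
# rather than the single root-to-leaf path a live conversation follows.
# The example claims are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    responses: list = field(default_factory=list)  # counterarguments or support

    def add(self, text: str) -> "Claim":
        child = Claim(text)
        self.responses.append(child)
        return child

root = Claim("We should fund intervention X")
evidence = root.add("There is no rigorous evidence behind X")
evidence.add("Two recent RCTs support X")    # the branch the conversation went down
root.add("X is less cost-effective than Y")  # a branch never reached out loud

def show(claim: Claim, depth: int = 0) -> None:
    """Print the whole tree, so 'winning' one branch doesn't settle the root."""
    print("  " * depth + "- " + claim.text)
    for response in claim.responses:
        show(response, depth + 1)

show(root)
```

The point of the structure is just that prevailing at the leaf about RCTs says nothing about the unexplored cost-effectiveness branch, which a linear conversation quietly drops.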

I am told that people have often tried to improve conversational norms, but I only know of a few such efforts. These are innovations such as randomized alarms while talking, hand gestures during seminars, argument mapping, and anonymous text conversation while everyone is in the room. I’d like to see a better survey of such attempts, but so far have not found one. Hopefully I just haven’t guessed the right keywords. Pointers would be appreciated.

For figuring out what is true, it seems many of these problems could be resolved by writing a more permanent, public, well structured outline of a topic of debate, then adding to it as you find new arguments. I think various people are in favor of such a thing, but I haven’t seen it done much (again, pointers appreciated). I have tried this a bit with Paul Christiano, with the hope that it will either help somewhat, or allow us to figure out the problems with it.

Here are a few examples in progress:

  1. The case for Cool Earth (linked before from my climate research)
  2. US open borders advocacy
  3. Animal activism

So far we don’t seem to have found any irrecoverable problems. However I’m curious to hear your thoughts on the merits of such interventions. Especially if you think this sort of thing is a terrible aid to discussion for the purpose of figuring things out.

In praise of pretending to really try

Ben Kuhn makes some reasonable criticisms of the Effective Altruism movement. His central claim is that in the dichotomy of ‘really trying’ vs. ‘pretending to try’, EAs ‘pretend to really try’.

To be explicit, I understand these terms as follows:

‘Really trying’: directing all of your effort toward actions that you believe have the highest expected value in terms of the relevant goals

‘Pretending to try’: choosing actions with the intention of giving observers the impression that you are trying.

‘Pretending to really try’: choosing actions with the intention of giving observers the impression that you are trying, where the observers’ standards for identifying ‘trying’ are geared toward a ‘really trying’ model, e.g. they ask whether you are really putting in effort, and whether you are doing what should have the highest expected value from your point of view.

Note the normative connotations. ‘Really trying’ is good, ‘pretending to try’ is not, and ‘pretending to really try’ is hypocritical, so better than being straight out bad, but sullied by the inconsistency.

I claim Effective Altruism should not shy away from pretending to try. It should strive to pretend to really try more convincingly, rather than striving to really try.

Why is this? Because Effective Altruism is a community, and the thing communities do well is modulating individual behavior through interactions with others in the community. Most actions a person takes as a result of being part of a community are pretty much going to be ‘pretending to try’ by construction. And such actions are worth having. If they are discouraged, the alternative will not be really trying. And pretending to try well is almost as good as really trying anyway.

Actions taken as a result of being in a community will be selected for being visible, because visible actions are the ones you are able to pick up from others in the community. This doesn’t necessarily mean you are only pretending to try – it will just happen to look like pretending to try. But your actions will also probably be visible because, by assumption, you are driven to act in part by your community membership, and most ways such motivation can arise involve the possibility of others in the community being aware of your actions.

Many actions in communities are chosen because others are doing them, or because others will approve of them, or because that’s what it seemed a good community member would do, not because they were calculated to be best. And these dynamics help communities work and achieve their goals.

Of course people who were really trying would do better than those who were only pretending to try. As long as they could determine what a person who was pretending would do, a person who really tries could just copy them wherever it is useful to have pretending-to-try type behaviors. But there is no option to make everyone into such zealous tryers that they can do as well as the effective altruism movement without the social motivations. The available options are just those of what to encourage and what to criticize. And people who are pretending to really try are hugely valuable, and should not be shamed and discouraged away.

A community of people not motivated by others seeing and appreciating their behavior, not concerned for whether they look like a real community member, and not modeling their behavior on the visible aspects of others’ behavior in the community would generally not be much of a community, and I think would do less well at pursuing their shared goals.

I don’t mean to say that ‘really trying’ is bad, or not a good goal for an individual person. But it is a hard goal for a community to usefully and truthfully have for many of its members, when so much of its power relies on people watching their neighbors and working to fit in. ‘Really trying’ is also a very hard thing to encourage others to do. If people heed your call to ‘really try’ and do the ‘really trying’ things you suggest, this will have been motivated by your criticisms, so it seems more like a higher quality of pretending to really try than like really trying itself. Unless your social pressure somehow pressured them to stop being motivated by social pressure.

I also fear that pretending to try, to various extents, is underestimated because we like to judge other people for their motives. A dedicated pretender, who feels socially compelled to integrate cutting edge behavioral recommendations into their pretense can be consequentially very valuable, regardless of their virtue on ethical accounts.

So I have argued that pretending to try is both inevitable and useful for communities. I think also that pretending to really try can do almost as well as really trying, as long as someone puts enough effort into identifying chinks in the mask. In the past, people could pretend to try just by giving some money to charity; but after it has been pointed out enough times that this won’t achieve their purported goals, they have to step up their act if they want anyone (themselves included) to believe they are trying. I think a lot of progress can come from raising the requirements one must meet to look like one is trying. This means pointing out concrete things that ‘really trying’ people would be doing, and it hopefully leads to pretending to really try, and then to pretending to really really try. Honing the pretense, and making sure everyone knows what the current standards are, are important tasks in a community with goals, and ones that might be forgotten if we undervalue pretending to really try.

Ethical Intuitionism, part 1

I read Michael Huemer’s book Ethical Intuitionism at Bryan Caplan’s suggestion. Before reading it I thought that Ethical Intuitionism seemed an unlikely position and that Bryan and Michael seemed like smart guys, so I hoped I might be persuaded to significantly change my mind. I still believe both of the premises, but did not get as far as changing my mind, so here I’ll report back on some of my reasons. I actually read it a while ago, and am reconstructing some of this post from memories and scribbles in margins, so apologies if this causes any inaccurate criticism.

Ethical intuitionism is the position that there exist real, irreducible, moral properties, and that these can be known about through intuition. i.e. some things are ‘good’ or ‘right’ independent of anyone’s feelings or preferences, and you can learn about this through finding such ideas in your head. I found Huemer’s case for it well written, thought provoking, and for the most part valid. However I thought it had some problems, and was overall not compelling.

Huemer divides metaethics into five types of theories, then argues against non-cognitivism, relativism, nihilism, and naturalism. The last theory standing is intuitionism, which Huemer goes on to defend from the criticisms previously leveled at it. Note that these metaethical theories are both theories of what ethics is and theories of how we can know about it.

I

The first problem is related to intuitionism being primarily a theory of how we can know about ethics (through intuitions), while being hazier on what exactly ethics is, in terms of anything else. It’s something real that we can learn about through our intuitions. It is normative. If the theory contained any more concrete specification of the nature of ethics or how it got there, I think it would become more obviously subject to the same criticisms made of other metaethical theories. For instance, this is one of Huemer’s criticisms of subjectivism:

Fourth, consider the question, why do I approve of the things I approve of? If there is some reason why I approve of things, then it would seem that that reason, and not the mere fact of my approval, explains why they are good. If I approve of x because of some feature that makes it desirable, admirable, etc., in some respect, then x’s desirability (etc.) would be an evaluative fact existing prior to my approval. On the other hand, if I approve of x for no reason, or for some reason that does not show x to be desirable (etc.) in any respect, then my approval is merely arbitrary. And why would someone’s arbitrarily approving of something render that something good?

This argument does not seem specific to subjectivism. Take any purported source of morality S. If there is some reason for S justifying what it does, one could similarly say that that reason, and not the mere fact of being allowed by S, explains why the things S allows are good. And if S has no reason, then it is arbitrary. And why would arbitrarily being allowed by S render something good?

If we were clearer on the origins of goodness in the intuitionist account, doesn’t it fall prey to one side or the other of the same argument? This argument appears to rule out all sources of goodness.

Perhaps the difference with the intuitionist account here is that it is ok for it to be arbitrary, because while goodness is arbitrary, it isn’t caused by some other arbitrary thing. You can answer ‘why would arbitrarily being good render something good?’, by pointing out that this is logically necessary, whereas on the other accounts you are seeking to equate two things that are not already identical.

On the intuitionist account you are also trying to equate two non-identical things I think – I have merely labeled them ambiguously above. On the one hand, you have ‘goodness’ which is a property you receive statements about via your intuition, and on the other you have ‘goodness’ which implies you should do certain things. If these were not distinguishable, there would be no debate. But it could still be that the goodness which means you should do certain things is primary, and causes the goodness which you observe. Then the former goodness would be arbitrary, but nothing else arbitrary would ‘make’ good things good.

So perhaps the argument for intuitionism is that arbitrariness alone is ok, but it is not ok for a thing to be caused by other arbitrary things. Or perhaps only moral things shouldn’t be caused by other arbitrary things? Or only goodness? I’m not sure how these things work, but this seems very arbitrary, and I have no distinguished way to decide which things should be arbitrary and which things principled.

II

Huemer argues that we should not disregard ‘intuitions’ as vague, untrustworthy, or unscientific. He points out that most of our thoughts and beliefs are built out of intuitions at some level. Our scientific beliefs rest on reasoning that is ultimately justified by a bunch of stuff turning up in our heads – visual perceptions, logical rules, senses of explanatory satisfaction – and us feeling a strong inclination to trust it.

I think this is a good and correct point. However I don’t think it supports his position. On this conceptualization of things, non-intuitionists also reach their ethical judgements based on intuitions. They just take into account a much wider array of intuitions, and build more complicated inferential structures out of them, instead of relying solely on intuitions that directly speak of normativity.  For instance, they have intuitions about causality, and logical inference, and their perceptions, and people. And these intuitions often cause them to accept the claim that people evolved. And they have further intuitions about how disembodied moral truths might behave which make it hard for them to imagine how these would affect the evolution of humans. And they infer that such moral truths do not explain their observed mental characteristics, based on more intuitions about inference and likely states of affairs. And people do this because they find such methods more intuitive than the isolated moral intuitions.

So it seems to me that Huemer needs an argument not for intuitions, but for using very direct, isolated intuitions instead of indirect chains of them. And arguments for this more direct intuitionism seem hard to come by. At the outset, it seems to undermine itself, as it is clearly not what most people find intuitive, and so appears to need support from more complicated constructions of intuitions, if it is to become popular. However this is not a very fatal undermining. Also, indirect inferences from many intuitions woven together have been very fruitful in non-moral arenas, though whether better than direct intuitions is debatable.

In sum, Huemer makes a good case that everything is made of intuitions, but this doesn’t seem to normatively support using short chains of intuitions over the longer, more complex chains that persuade us for instance that if there was an ethical reality out there, there is no reason it would interfere with our evolution such as to instill knowledge of it in our minds.

III

One might argue that Huemer often begged the question, for instance by claiming that other ethical theories are wrong because their ethical consequences are contrary to our ethical intuitions. This is perhaps an unsympathetic interpretation – usually he just says something like ‘but X is obviously not good’ – but I’m not sure how better to interpret this. Since all thoughts can be categorized as intuitions, it would be hard not to use some sort of intuitions to support an intuitionist conclusion. However, given the restatement of his position as a narrower ‘direct intuitionism’, it would be nice to at least not use just those kinds of direct intuitions as the measure.

***

The issues of how evolved creatures would come to know of moral truths, or how moral truths would have any physical effect on the world at all, seem like big problems for intuitionism to me. Huemer addresses them briefly, but doesn’t go into enough depth to satisfactorily resolve them, I think. I mean to write about this issue in another post.

So in sum, ethical intuitionism seems subject to similar criticisms as other metaethical theories, but avoids them by being nonspecific or begging the question. ‘What do my intuitions say?’ seems like an unfair criterion for correct ethical choices in a contest between intuitionism and other theories. The arguments for intuitions being broadly important and trustworthy do not support this thesis of intuitionism, which is about relying on isolated immediate intuitions over more indirect constructions of intuitions. A vague case can be made for moral intuitions evolving, but I doubt it works in detail, though I have also not yet detailed any argument for this.

Research on climate organizations

Burning rainforest in Brazil: destruction of tropical rainforest, as prevented by Cool Earth (Photo credit: Wikipedia)

Here is the second post summarizing some of the climate change research I did at CEA last summer (the first is here). Below are links to outlines of the reasoning behind a few of the estimates there (they are the ‘structured cases’ mentioned in the GWWC post).

The organizations were investigated to very different levels of detail. This is why Sandbag for instance comes out looking quite cost effective, but is not the recommendation. I basically laid out the argument they gave, but had almost no time to investigate it. Adding details to such estimates seems to reliably worsen their apparent cost-effectiveness a lot, so it is not very surprising if something looks very cost effective at first glance.

The Cool Earth case is the most detailed, though most of the details are rough guesses at much more detailed things. The cases are designed to allow easy amendment if more details or angles on the same details are forthcoming.

As a side note, I don’t think GWWC plans on more climate change research soon, but if anyone else is interested in such research, I’d be happy to hand over some useful (and neatly organized!) bits and pieces.

Cool Earth

Solar Aid

Sandbag

Probabilistic self-defeat arguments

Alvin Plantinga’s ‘evolutionary argument against naturalism’ (EAAN) goes like this:

  1. If humans were created by natural selection, and also not under the guidance of a creator (‘naturalism’), then (for various reasons he gives) the probability that their beliefs are accurate is low.
  2. Therefore believing in natural selection and naturalism should lead a reasonable person to abandon these beliefs (among others), i.e. belief in naturalism and natural selection is self-defeating.
  3. Therefore a reasonable person should not believe in naturalism and natural selection.
  4. Naturalism is generally taken to imply natural selection, so a reasonable person should not believe in naturalism.

This has been attacked from many directions. I agree with others that what I have called point 1 is dubious. However even on accepting it, it seems to me the argument fails.

Let us break down the space of possibilities under consideration into:

  • A: N&T: naturalism and true beliefs
  • B: N&F: naturalism and false beliefs
  • C: G&T: God and true beliefs

The EAAN says conditional on N, B is more likely than A, and infers from this that one cannot believe in either A or B, since both are included in N.

But there is no obvious reason to lump A and B together. Why not lump B and C together? Suppose we believe ‘natural selection has not produced true beliefs in us’. Then either natural selection has produced false beliefs, or God has produced true beliefs. If we don’t assign very high credence to the latter relative to the former, then we have a version of the EAAN that contradicts its earlier incarnation: ‘natural selection has not produced true beliefs in us’ is self-defeating. So we must believe that natural selection has produced true beliefs in us*.

What if we do assign a very high credence to C over B? It seems we can just break C up into smaller parts, and defeat them one at a time. B is more likely than C&D, where D = “I roll 1 on my n-sided die”, for some value of n. So consider the belief “B or C&D”. This is self-defeating. As, it would seem, is “B or C&E”, where E is “I roll 2 on my n-sided die”. And so on.
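To see the trick numerically, here is a small sketch in Python. The probabilities are made up for illustration; the point is only that for large n, almost all of the probability of “B or C&D” sits in B.

```python
# Made-up numbers for the disjunction trick above; only the shape matters.
p_B = 0.2  # P(B): naturalism and false beliefs
p_C = 0.6  # P(C): God and true beliefs
n = 100    # sides on the die; D = "the die comes up 1"

p_CD = p_C / n                    # P(C & D) = 0.006, assuming an independent roll
p_reliable = p_CD / (p_B + p_CD)  # P(true beliefs | "B or C&D") ~= 0.029

print(f"P(reliable beliefs | B or C&D) = {p_reliable:.3f}")
# By the EAAN's own standard this is low, so "B or C&D" would be
# self-defeating; and "B or C&E", "B or C&F", and so on fare no better,
# letting the defeat spread over the whole of C piece by piece.
```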

By this reasoning, if there is any possible world that is self-defeating, pretty much any other possible world can be sucked into the defeat. This depends a bit on the details about how unlikely reliable beliefs must be for belief in that situation to be self-defeating, and how the space can be broken up. But generally, this reasoning allows the self-defeatingness of any state of affairs to contaminate any other state of affairs that can be placed in disjunction with it, and that can be broken into states of affairs not much more probable than it.

It seems to me that any reasoning with this property must be faulty. So I suggest probabilistic self-defeat arguments of this form can’t work in general.

It could be that Plantinga means to make a stronger argument, for instance ‘there is no set of beliefs consistent with naturalism under which one’s beliefs have high probability’, but this seems like quite a hard argument to make. I could place a high probability on A for instance.

It could also be that Plantinga means to use further assumptions that make a distinction between grouping A and B together and grouping B and C together. One possibility is that it is important that N is a cause of T or F, but this seems both ad-hoc and possible to get around. At any rate, Plantinga doesn’t seem to articulate further assumptions in the account of his argument that I read, so his argument seems unlikely to be correct as it stands, all other criticisms aside.

*Note that if you wanted to turn the argument against creationism, it seems you could also just expand the space to include creators who don’t produce true beliefs, and, depending on probabilities, use this to defeat the belief in a creator, including one who does produce true beliefs.