Ernie and I have exchanged a few emails in addition to our recent posts, trying to figure out a way to proceed that is less prone to the difficulties we have been experiencing. We decided we needed to shorten the feedback loop so that we can clarify things more quickly when something stops making sense to either one of us. On pretty short notice (about three hours), we decided to have weekly IM chats, and we just finished the first one. Ernie whipped up a little program to convert the transcripts of these sessions for inclusion on his blog, so you can read our first conversation in Chatalogue: True Good. I think we did pretty well for an hour of typing, even if the transcript is occasionally hard to follow where we were typing simultaneously. Just ignore my typos. (After the transcript, I've added a small toy sketch of the "network of desires" computation we wrestle with near the end.)
A: I'm here!
E: G'day
E: Did you see my transcribed version?
A: Yes, I did. Nice work.
E: Are you okay with me blogging the transcript like that?
A: We'll see. I guess a chat has a bit different flavor than normal written stuff. It just seems a bit weird to know that everything I am saying is being transcribed for posterity.
A: But, onward!
E: So, shall we start with a bit of meta-discussion, to see if we can get onto the same page?
A: Sure
E: My hope is that, rather than trying to prove each other wrong, we can focus more on trying to build a common understanding.
A: My difficulty is that, while I agree in principle, I am not sure what it means to build a common understanding about something on which we (may) have fairly fundamental disagreements.
E: Well, ideally we could at least figure out *where* we disagree!
A: I mean, we can agree about things at some meta-level, but I am not sure that will be very satisfying.
E: The funny thing is, we actually seem to agree on the vast majority of facts.
A: Vast majority? Not sure how to count that. Some, many?
E: We both believe in the scientific method, a generally Western set of ethics, a historical critique of the Bible, etc.
E: I have a hard time finding a "point of fact" where we've had serious disagreement.
A: That's still a pretty big tent.
E: From where I sit, it is mostly a matter of "interpretation" and conclusion where our communication breaks down.
E: I also think that, at least in theory, we respect the same rules of logic (even if we sometimes fall short in practice).
A: Sure
E: For example, I presume you agreed with the bulk of logical tools employed by Alonzo in "A Better Place"
A: Yes. Although I had intended to re-read it before we discussed it, and I haven't yet. (Thanks for reminding me.)
E: Of course, that raises the question: if we have so much in common, why do we seem to miss each other's meaning so often?
E: Is it intellectual, emotional, semantic, or purely a communication gap?
A: I am not sure how to classify it. Sometimes the things that you think are important to the critical issues (the trilemma, anger, etc.) just don't seem that "helpful" to me.
E: Yeah, I realize that.
E: It is entirely possible that our differences are (at least in part) a matter of priority.
E: I see theism as solving a host of formal and practical problems that you may not consider important and relevant.
A: Hold on a second.
E: ok
A: I get that impression from what you write, but usually it seems to me more like "if theism were true, it would solve this problem this way"; it doesn't really provide reasons to believe that theism is true.
E: Right, which brings us back to the epistemic issues.
A: How is what you are saying not an argument from consequences?
E: It is more an operational definition.
E: I'm still not even sure what you mean by 'true' in this context.
A: I'm speaking a bit loosely, of course. Because we're chatting.
E: Or if you're still thinking of "truth" as a boolean yes/no.
A: vs. a probabilistic sense of truth?
E: Um, not exactly.
E: More like a fuzzy logic sense of truth values.
A: okay
E: The way we can say "Newtonian mechanics is true, but less true than Quantum Mechanics"
A: right
E: From my point of view, Christian theism is a successful "theory" which explains certain facts very well, but there are also many facts which (as currently formulated) it doesn't explain.
A: But from what I have seen, some of the "facts" that you say it explains are not really "facts" at all.
E: Okay, then we get to definition of "facts" :-)
A: For instance, the statement about Christianity being responsible for the success of Western civilization.
E: That was a terminology problem, I later realized (after reading another post of yours).
E: I was using Toynbee's classification, and referring to Western Christendom as founded by Charlemagne.
E: My bad -- I should've been more precise.
A: So, Christianity was responsible for Western Christendom? How is that helpful?
E: It is really hard to build a civilization. There are only 22 or so that Toynbee was able to catalogue.
E: There's great diversity, of course, but the "creative minority" who developed those civilizations had to find beliefs powerful enough to grow and maintain a society.
E: Christianity is hardly unique in this fashion, but it implies they were onto "something."
A: Some people have referred to that sort of thing as "belief in belief".
E: Sure -- but not every belief in belief works.
A: Sure. But the beliefs do not have to be true to work either.
E: Not absolutely true, but at least relatively true.
E: And a "truer belief" must work better than the "false belief" it replaces, no?
A: I've been reading a book by the anthropologist Scott Atran. He goes so far as to say that no society has survived without certain kinds of shared beliefs and rituals, but that they all contain counter-factual, quasi-propositional elements.
E: Sure.
E: So does science.
A: In fact, the "cost" of believing falsehoods is part of what makes them work.
E: Newtonian physics was dead-wrong about action at a distance.
E: Quantum physics has random infinities we just ignore.
E: We know it can't explain gravity.
E: I'm not sure I understand or buy the "cost" part, though...
A: I am not sure I can describe it well enough, quickly enough, but the basic idea is that people tend to trust people who have demonstrated a willingness to participate in expensive behaviors on behalf of their society.
E: Sure.
A: But he develops it much better than that.
A: I have been meaning to blog about it. He has a number of very interesting quotes that relate to various parts of our conversation.
E: Well, then that raises another question: do you believe that virtue is rational, and that truth *always* ultimately supports virtue?
E: Like Alonzo tries to prove.
A: I guess I wouldn't word it like that, but yes, more or less.
E: So, I'm confused. Are you saying:
E: a) It is always better to believe the truth.
E: or
E: b) Ethical behavior is contingent on believing a shared group falsehood
A: That's a very interesting question, isn't it?
E: Which is true, or which you're saying? :-)
A: I lean toward (a), but I have to admit the possibility that successful societies have been based on (b).
A: Note that "successful" is not the same as "ethical"!
E: Isn't it?
E: I thought that this was how Alonzo defined "ethics": as fulfilling societally constructed desires.
A: A successful society is one that continues. That does not imply that all members of that society have their desires met.
E: Sure, but most definitions of society assume that the individuals see their survival as tied to that of their society.
A: Again, survival is not their only desire.
E: Sure, but in the absence of survival, what other desires can be met?
E: Isn't it at least a prerequisite?
A: It may be the strongest desire, but not the only one. So, yes, it is a prerequisite, but not sufficient.
E: Sure.
E: Let me try to summarize.
E: At least, as best I understand Alonzo.
E: 1) "Good behavior" is that which maximizes desire fulfillment within a society
E: 2) One of the baseline desires of a society is for its own survival
E: 3) Therefore, a set of behaviors "X" that improves a society's chance of survival is, by definition, better than a comparable set "Y" that decreases those chances.
E: Are those all true statements?
A: Sorry... first try didn't work.
A: I think we need to be careful not to conflate the desires of individuals with desires of a "society"
E: Is not a "society" merely the aggregate of its individuals' desires?
A: If we define it that way, but I think we have to be careful of equivocation.
E: I would love to see a formal definition, as I don't recall one from "A Better Place" (ABP)
A: For instance, when we talk about the survival of a society, are we talking about just the survival of the aggregate of the individuals' desires?
A: When I hear "survival of a society" I think of survival of the various structural elements. Take the example of a country ruled by a despotic line of rulers.
A: The individuals in that country may not be having their desires met, but perhaps the structure persists.
E: That's an excellent point. What exactly does "ethical behavior" mean when the lawgivers are corrupt?
A: That is "surviving" but not ethical.
E: Is murdering the king's tax collectors an ethical behavior?
E: I thought ABP defined ethics relative to the society's institutions and cultural norms.
A: I would say that the society's institutions and cultural norms affect people's desires, so they are reflected in ethics in that sense.
E: I think this is one of those areas where my understanding of Alonzo is murky.
E: Morality is defined relative to a "network of desires", right?
A: Yes.
E: Okay, *which* network? My local community? My society? My progeny?
A: I agree that he is not entirely clear about that.
A: But I think it is partly the extent to which our actions affect others' desires.
A: What is the "reach" in the network?
E: But can't one get almost any answer one wants simply by choosing the appropriate network?
A: As an individual, I am not a member of any possible network.
A: I am a member of a particular network.
A: (with fuzzy boundaries, perhaps)
E: Multiple fuzzy boundaries, with unbounded scope, no?
E: The carbon you emit could change whether a Chinese power plant gets built, no?
A: Sure. So in that case, the appropriate network is a global one.
E: So, in the general case, is morality defined relative to the global network of all humans who are currently living?
A: Yes.
E: (even if the coupling of certain actions is quite weak)
E: And might live in the future?
E: Or only to the extent some of us alive today happen to care about our progeny?
A: We get into difficulties here because there are practical problems in "evaluating the metric".
E: Um, yeah.
E: That's the problem I have with Alonzo's whole approach.
E: It seems to work fine as a *descriptive* model of what we mean, but it seems to fall apart into uncountable sums as soon as you try to use it *prescriptively*.
A: I think there are some difficult cases, and some cases that are not so difficult.
E: But the "easy" cases all seem to rely on ad hoc assumptions.
E: I agree it "might" be true -- in fact, at some level I think DU (desire utilitarianism) is true -- but without knowing its boundary conditions, it seems impossible to make any sort of valid claims.
A: I understand where you are coming from there.
E: Thank you.
A: I can imagine setting up a network of desires and "solving" it.
A: But the complete problem appears intractable.
E: Well, not necessarily.
E: Let us define "NOD" as the global network of desires.
E: The goal is to optimize the "most and strongest" of the desires in NOD, right?
A: Right.
A: But remember that we are dealing with *mutable* desires.
E: The question is, is there some structure in NOD which allows us to reduce the N! weightings to a calculable heuristic?
A: (That is, some of the desires are mutable.)
E: Right. I believe one of the assumptions we need to make is that desires exist with some well-defined distribution.
E: That is, the *important* desires are not fully mutable, but need to fall within some well-defined range.
E: (at least in principle, even if we don't know what that range is)
A: Right. We can take a "statistical mechanics"-like approach to the problem without pretending to solve for every individual variable.
E: Exactly.
E: But, that requires us to assume that:
E: a) there are meaningful aggregate metrics that we can discern
E: b) it is possible, in principle, to maximize them
E: c) it is worth the effort to attempt to discover them
E: If the problem is underconstrained, then we get a nice relative world where we can pretty much construct our own morality.
E: If it is overconstrained, then we (or some group of us) are screwed. :-(
E: Make sense?
A: Yep.
E: So, here's the funny thing.
E: DU only seems to be well-defined and useful if all these other assumptions are true.
E: Otherwise, it is just a post hoc rationalization of what we (or society) have already decided is true based on other considerations.
E: (at least if we're talking about "good" in the moral sense, not merely "good for a particular purpose")
A: I'm not sure that's true.
E: Can you give a counter-example?
A: Let me think a second... still trying to decide.
E: Actually, it is almost 6 pm.
E: Do you want to take that as homework? :-)
A: We can pick up there next week.
E: Okay.
E: So, shall I post this?
A: That's fine. I'll probably post a link to it after you've done that.
E: Fair enough.
E: If I have time, I'll try to clean up the argument.
E: At any rate, I think this was *way* better than our previous exchanges. :-)
A: Yes, definitely.
E: Thanks for suggesting direct conversation.
E: Have a good week.
A: Same time next week? Or coordinate later?
E: Same time, same place.
A: OK. Thanks, Ernie.
E: Thank you!
E: Bye.
A: bye
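
Postscript: to make the "network of desires" (NOD) discussion near the end a bit more concrete, here is a minimal sketch in Python of the kind of calculation we were gesturing at. Everything in it is my own invention for illustration, not anything from "A Better Place": the Desire class, the sample people and desires, the fulfillment numbers, and the score function are all made up. It shows only the easy half of the problem: given a fixed network and a fixed set of candidate actions, picking the action that best fulfills the "most and strongest" desires is just a weighted sum.

```python
# Toy "network of desires" (NOD) -- all names and numbers here are
# invented for illustration; nothing is taken from "A Better Place".
from dataclasses import dataclass

@dataclass
class Desire:
    holder: str      # who has the desire
    about: str       # the state of affairs the desire is for
    strength: float  # relative strength, on an arbitrary 0..1 scale

# A tiny "global" network: three people, five desires.
nod = [
    Desire("alice", "clean_air",   0.9),
    Desire("alice", "cheap_power", 0.4),
    Desire("bob",   "cheap_power", 0.8),
    Desire("bob",   "clean_air",   0.5),
    Desire("carol", "clean_air",   0.7),
]

# Each candidate action maps states of affairs to a fulfillment level in
# [-1, +1]: +1 fully fulfills desires about that state, -1 fully thwarts them.
actions = {
    "build_coal_plant": {"cheap_power": +1.0, "clean_air": -0.8},
    "build_solar_farm": {"cheap_power": +0.5, "clean_air": +0.6},
    "do_nothing":       {},
}

def score(effects, network):
    """Strength-weighted fulfillment summed over the whole network --
    a crude stand-in for 'the most and strongest desires'."""
    return sum(d.strength * effects.get(d.about, 0.0) for d in network)

for name, effects in sorted(actions.items()):
    print(f"{name:17s} score = {score(effects, nod):+.2f}")
print("best:", max(actions, key=lambda a: score(actions[a], nod)))
```

Of course, everything we actually argued about is hidden in the hand-picked numbers: the boundary of the network, the mutability of the desires, and the interactions among them (the N! weightings Ernie mentioned) don't appear at all. The "statistical mechanics" hope is that real distributions of desire strengths are regular enough that aggregates like this can be estimated without solving for every individual variable.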