The Fallacies of Victor Reppert's Composition

by: Ben Schuldt


I've found Victor Reppert's "Reply to Carrier" post in three places (here, here, and here) and closely reviewed the version that was most recent. I will also be responding to Bill Vallicella's comments as we go along.

Reppert says:

So the question one needs to ask in response to my book is not 'Does Reppert conclusively refute naturalism?' Rather, one should ask 'Has Reppert shown some promising ideas for criticizing naturalism that give reasonable people a good reason to reject it?'

Reppert is of course correct that Richard Carrier (and myself, for instance) would answer both questions with a "no." While I believe what Carrier has done from his perspective is to set up the case in possibly its most thorough conception (which is fair from one perspective), that doesn't mean we can't for the sake of argument set the list of technicalities aside and address the issue on the terms presented from another perspective. It fares no better in my opinion. As we will soon see, Reppert and company have a philosophical and conceptual problem with understanding all the evidence for physicalism that the world could ever hope to present, and this problem inhibits further discussion.

As Reppert says:

So I maintain that there is a logico-conceptual chasm between the various elements of reason, and the material world as understood mechanistically. Bridging the chasm isn’t going to simply be a matter of exploring the territory on one side of the chasm.

Well, we're going to have to do some exploring and I'll be explaining why as we go along.

Three Underlying Problems with Carrier's Critique and Two Errors Physicalists Make?

Reppert's reason for making a response is to reiterate that the AfR does in fact have promise and he claims that there are three underlying problems with Carrier's critique:

Problem 1: Though Carrier accuses Reppert of what Carrier calls a "possibility fallacy," both Reppert and Darek Barefoot contend the merit of the argument from reason (AfR) is due to a logical impossibility that is identifiable on the spot regardless of endless further research into "possible" naturalistic explanations. Hence the noted failures of various naturalistic explanations that have been addressed are secondary and incidental to the primary AfR.

I don't have much to say here. I acknowledge that AfR advocates formulate their arguments the way they do even though I believe this is unfair to naturalism. This can be illustrated better by moving on to the other issues.

Problem 2: Reppert says that Carrier has misrepresented his argument and CSL's argument as a "causation fallacy." Reppert tries to clarify with the following:

If a physical account of the process is causally complete, and physics is mechanistic, how do reasons come into play?
If reasons are mechanical procedures, then they are by definition already a part of the loop. Reppert and Barefoot are guilty of assuming their conclusion with sentiments like these:

...we can appreciate that the blind functions of a computer have been so arranged as to accomplish a rational purpose only because, unlike the computer, we possess genuine rationality. [emphasis mine]
What is the physicalist to do when Reppert and Barefoot, presented with proof of concept, arbitrarily reject it? Isn't the whole argument about whether or not mechanical reasoning is "genuine" reason? Are we to take this as an "argument from appreciation" now, or perhaps an argument from materialistic depreciation?

Problem 3: Reppert says that Carrier erroneously accuses him of an "armchair science fallacy" since he clearly states in his book the conditions that would have to be met. Reppert also claims that Carrier failed to meet them: [It] simply won’t do for Carrier to merely assert that there’s all this philosophical and scientific literature out there. He needs to show evidence that these analyses of mind don’t commit the two errors listed... [emphasis mine, for organizational purposes]
Of course, Carrier did actually do his job in addition to pointing to all the literature that would have already done it for him. We will see why there is a difference of opinion on the former, shortly.

Reppert insists:

Carrier’s own treatment of intentionality is repeatedly guilty of the first [error]...
Reppert says Error 1 is:

...naturalistic analyses of mind 'invariably fail,' largely because they 'sneak in' the very concepts they are trying to explain through the back door.
Reppert says Error 2 is:

They also tend to re-describe what they are trying to explain in terms that will make such things as consciousness and reasoning more tractable to naturalistic analysis, but this produces what I call a 'subtle changing of the subject.' Instead of explaining their subject matter, they explain it away.
I think it is helpful to identify three possible levels of explanation of mental processes. The first level is in normative terms that people are used to engaging in. The second level would be in terms of appealing partially to what is already known (i.e. the first level) and breaking down another aspect into a less mental, more technical format. And the third level I would identify would be the incredibly tedious explanation of all of it in terms that would allow a computer engineer to literally build a fully analogous artificial intelligence. Perhaps we could add in some other levels of articulation, but these three will do for now.

As I see it, the first level is neutral in regards to the argument. Average people say they think this or that without really qualifying what they mean in regards to the ontology of thought one way or the other. However, it seems the appearances of this level are the primary means by which AfR advocates go to town, disregarding levels 2 and 3. I would consider normative discussions on thought to be the necessary abbreviated terms people live with (that they experience their mind, speak in, and take for granted on a daily basis), because operating at the full complexity of level 3 would be a dramatic waste of mental experience and time. Humans are already prone to computational errors and do not do better when the task manager starts filling up. Increasing that processing by many orders of magnitude for even the most rudimentary thought would likely be a path evolution would end up avoiding. Successful thinkers who need to get rather sophisticated conclusions on the table for actual use in life could easily be imagined to outbreed "over-thinkers" who never manage to actually christen a functional conclusion (in other words, evolution doesn't care about your "infinite regress" problem). It's obvious why we don't process thought from subatomic particles up, and I seriously doubt robots will ever have to deal with that either.

What we find on level 2 is that though each piece of the puzzle can be explained in terms compatible with human minds and computers (and the analogies are endless and second nature for some of us), we're always leaning (what Reppert would call "sneaking in") at least partially on the familiar to do the job (i.e. level one). This creates the moving cups game of definition where one thing is explained and then AfR advocates bizarrely concede it, but back themselves into another argument (like an argument from consciousness, an argument from experience, or an argument from appreciation), somehow imagining at the same time that they have not actually conceded the part they did concede. They do so with any one item, but fail to notice that all the items have actually been explained. They just don't put the picture together and fail to notice that both of the supposed errors presented above merely beg the question from their perspective. It seems they merely assume the concepts are being snuck in (and find it whether it is there or not) or that the subject is being changed when things are explained mechanically. What if the subject isn't being changed? How would we know from anything Reppert has told us here? "Since reason is magic, the magic is being snuck in with the explanation, or when we attempt to explain the magic we are really changing the subject." How do we know it's magic, Reppert?

By the time we get to the third level, we are being rather unfair since we aren't yet ready to build fully analogous A. I., we are being unfair by not accepting where we already are (and have been for many years) in regards to explaining many pieces of the puzzle, and/or we are being unfair by expecting ten thick technical volumes on fully analogous A. I. to make as much sense to us as level one does:

When we consider seriously what reasoning is, when we reject all attempts at “bait and switch” in which reasoning is re-described in a way that makes it scientifically tractable but also unrecognizable in the final analysis as reasoning, we find something that looks for all the world to be radically resistant to physicalistic analysis.
Maybe Bill Gates can recognize Windows Vista in any abstract physical incarnation it takes, but most of the rest of us would be scratching our heads at how in the world all those zillions of lines of code miraculously translate into our desktop experience. It suffices to say that AfR advocates have all the wrong expectations of what physicalism should entail if they ever got their minds on it.

So in review, error 1 entails explanatory unfairness, as though it were reasonable to communicate at level two without appealing to some level one concepts. AfR advocates shoot themselves in the foot by conceding the argument only to re-assert their conclusion because some other level 1 idea was along for the ride to help out. "I could understand how a computer could account for x, but you've poisoned the discussion with y!" Or the discussion on y is "poisoned" with the mere allusion to z, and so on and so forth. Error 2 is a matter of physicalists being guilty of doing their job. If physicalism is true, the end result should look unrecognizable from the standpoint of level one (i.e. superficially), and AfR advocates have the wrong expectations. Both supposed errors on the part of physicalists are the result of AfR advocates assuming their own conclusion (and we can maybe throw a pinch of "lack of imagination" in there, too), in my opinion.

Incoherence with a Capital "G." That's Right. Gincoherence:

Reppert says:

He [Carrier] spends little energy trying to show that even if naturalistic explanations of reason are unsuccessful, offering a theistic account of the phenomenon of reason does nothing to alleviate our ignorance. He instead maintains, not that I have overestimated the power of theistic explanations, but that I have underestimated that power of naturalistic explanations. (Of course one can argue both that naturalistic explanations are adequate and that theistic explanations are inadequate, but neither Carrier nor Parsons actually do both.)
So what? But why not? We can certainly take a moment to ensure we've done both here. From my perspective, if we are scratching our heads at how incoherent naturalistic explanations seem to be, we should probably also be scratching our heads even more at this "horizon broadening" concept from AfR advocates. Mainly this is because (as I've said elsewhere) labeling reason "magic" is effectively saying there isn't an explanation at all. When we ask any questions about God's Reason with a capital R we don't exactly get any answers. It just "is." How informative is that? Magic entity x has magic property y and magically uses its other random magic properties (that also don't have any explanation even in principle) to create other beings with magic property y. That appears to be the whole story. I don't know how you write entire books about that since apparently there's nothing to expound upon even if you had better access to God. God shows up in the lab just to say, "Yup, it's magic." Feel free to correct me with a list of deeper insights into Reason I've somehow missed. Currently I honestly don't see it, and similarly to how the AfR advocates don't need to go looking for other naturalistic explanations to know there aren't any, it doesn't seem I need to hold my breath on this count either.

As just one comparison, computer scientists will have literally tons of work ahead of them getting the incredibly sophisticated list of mechanical procedures for an A. I. machine just right. Further, much in the exact opposite direction to Reppert and company, God appears to be a conceptual contradiction in terms. He has all sorts of sophisticated properties and abilities and yet isn't reducible in nature. I await the latest "married bachelor" caliber explanation or heterodox version of theism that most of Christianity will have to reject. If you don't think your conception of "God" has any dynamic "moving" parts, it is hard to understand how any aspect of its essence is analogous to thought or any kind of mental activity (especially since these are all temporal concepts that can't apply to mainstream theism). God sounds a lot like an explicitly complicated simple thing, which is an unbridgeable conceptual gulf in my own mind (or just a thing that can't possibly be anything like a mind). If one does not think an absolutely simple, atemporal, immaterial mental anything has some conceptual problems (to name just a few of those kinds of issues), there's something wrong with one's thinking.

This applies to Reason with the capital R as well as to God's fundamental nature itself, since reason strikes me as rather sophisticated in and of itself. It would be difficult to inform someone who spoke a different language what the code word for reason in English was without going into some fairly elaborate exposition. Not a simple task at all, and it reflects the dynamics of what it is trying to explain. Reason is such a complicated process of belief ratification and justification that calling anything about it simple is beyond my powers of comprehension. The fact we can't justify using it (which is asking for a reason to use reason...) doesn't make it not complicated in execution. It just means as thought machines we are hopelessly indebted to some kind of cause and effect processing even if we aren't great thinkers.

In review, not only do I believe naturalistic explanations have a great deal of merit, but the explanatory power of the magic version fallaciously displaces the mystery into an incoherent non-explanation that has zero proof of concept otherwise (in terms of corroboration of so called "immaterial things"). Is that well-rounded enough a criticism?

The Fallacy of Composition and All Its Misbegotten Children:

Reppert says:

Carrier’s task is to show that you can build an intentional brick wall out of non-intentional bricks. Just as a brick wall can be six feet tall even though none of the bricks are, a state can be intentional even though the fundamental, underlying states are non-intentional, as is required by the understanding of naturalism that both of us accept.
Can physicalists explain a "flying machine" that is made from all non-flying parts? Tough call. Maybe we should try to build one or something and find out... Fortunately, Reppert actually does get around to addressing the FoC criticism here, but let's see what he does with it:

Reppert says:

But what does Carrier say about intentionality? He says that a material state A is about material state B just in case '[t]his system contains a pattern corresponding to a pattern in that system, in such a way that computations performed on this system are believed to match and predict behavior in that system.' Unfortunately, this analysis of intentionality is simply loaded with intentional concepts, so if we didn’t know what intentionality was before we heard from Carrier, we wouldn’t know now.

Vallicella encourages us to meditate on this point and I have. First, I would have to label this bit of the exchange the level 2 dialog problem between physicalists and AfR advocates I pointed out above. In my mind I picture a network of mental concepts that need to be explained. All of them are interrelated and require background knowledge of each other to keep each explanation relatively simple. When explaining one item, we don't worry about elaborating on the others and hence each explanation is "loaded" with peripheral level 1 concepts. As we explain each of the others, we don't bother elaborating on each of the ones that have already been covered. If Carrier gives Reppert what he appears to be saying he wants, we'll be at the level 3 dialog problem and then Reppert can reject that too because it's so "unrecognizable." Hence the loop is closed and Reppert is unable to be informed even if he is wrong.

Second, Carrier does point out that the truth of beliefs is about patterns mechanically corresponding to other patterns and a process of continually updating the pattern in our heads to match the pattern abroad. It is easy to use one's imagination to map that idea onto a robot (and note they've already done that) and then anticipate by extension how much more sophisticated "truth" would get as we build more sophisticated truth-finding machines. So I have to totally disagree with Reppert: before this was pointed out (even though in hindsight it's fairly obvious), I didn't have as good a definition of truth. Obviously I already knew what it was well enough to function practically, but Carrier's delineation in more mechanical terms was helpful. I'm not sure it's even fair to require a physicalist to be able to inform someone who does not already know what truth is what truth is. That seems to be a logical impossibility that has nothing to do with this debate. Hence, that would be an unfair goal post.
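Carrier's pattern-correspondence picture can be put in a few lines of toy code. This is only a minimal sketch of the general idea, not Carrier's own formalism; the class and names here are invented for illustration. An internal "pattern" is continually updated against an external one, and "truth" is just the degree of match:

```python
# Toy sketch of belief as pattern correspondence (illustrative only).
# A "belief" is an internal pattern; its "truth" is how well that pattern
# matches the pattern in the external system it is about.

class Believer:
    def __init__(self):
        self.model = {}  # internal pattern: feature -> expected value

    def observe(self, world):
        # Continually update the internal pattern toward the external one.
        for feature, value in world.items():
            self.model[feature] = value

    def accuracy(self, world):
        # Degree of correspondence between internal and external patterns.
        if not world:
            return 1.0
        hits = sum(self.model.get(f) == v for f, v in world.items())
        return hits / len(world)

world = {"sky": "blue", "grass": "green"}
b = Believer()
before = b.accuracy(world)  # no correspondence yet
b.observe(world)
after = b.accuracy(world)   # internal pattern now matches the external one
```

Nothing "magical" has been snuck in here: the correspondence is a mechanical comparison, and a more sophisticated truth-finding machine just does this with richer patterns and richer update rules.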


Reppert says:

What does 'corresponds' mean in this context? If I’m eating a pancake, and the piece of pancake on my plate resembles slightly the shape of the state of Missouri on the map, can we say that it corresponds to the state of Missouri; that it is a map of Missouri?
Incidentally I'm sure we could build a robot survey machine that records its findings in terms of actual pancakes. If we queried what the shape of a particular landmass was, it could show us a particular pancake from its inventory, and that could be the only form of memory of its findings that it possessed. Remove the pancake, it no longer knows. Put the pancake back. Then it knows again. "What is the shape of land mass x that you recently tracked?" Pancake-bot would then show us the Missouri-shaped pancake because it knows that this pancake corresponds to land mass x that was evaluated on a certain date. You could query a million other devices, or perhaps the food or microwaves at IHOP, and they wouldn't necessarily tell you what the shape of Missouri is. But this pancake-bot reliably could. The Missouri-shaped pancake (assuming the border of Missouri was something that could actually be figured out on the ground) would be the "virtual model" that was constructed via the fact checking mechanism (some kind of sensor). In this case the information would just be stored in a much more literal and ridiculously inefficient sense than, say, abstractly on a disk.
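The pancake-bot thought experiment can be sketched as a toy program (all names invented for illustration; a real survey robot would obviously involve sensors and control code, not a dictionary):

```python
# Toy "pancake-bot": its only memory of a surveyed landmass is a literal
# pancake whose shape corresponds to that landmass. Remove the pancake
# and it no longer knows; put it back and it knows again.

class PancakeBot:
    def __init__(self):
        self.inventory = {}  # landmass id -> pancake (the stored "virtual model")

    def survey(self, landmass, pancake):
        # The fact-checking mechanism (a sensor, in the story) bakes a
        # pancake whose shape corresponds to the surveyed landmass.
        self.inventory[landmass] = pancake

    def recall_shape(self, landmass):
        # Show the corresponding pancake, if the bot still has it.
        return self.inventory.get(landmass)

bot = PancakeBot()
bot.survey("landmass-x", "missouri-shaped pancake")
knows = bot.recall_shape("landmass-x")   # the bot "knows"
del bot.inventory["landmass-x"]          # remove the pancake...
forgot = bot.recall_shape("landmass-x")  # ...and it no longer knows
```

The point is that the "aboutness" of the pancake is entirely a matter of the mechanism that maintains the correspondence, not of any extra ingredient.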

It's pointless for Reppert to spot error 1 here because we've already built robots that do similar tasks, and we've had them for decades. I take it that Reppert knows this (perhaps even before he read Carrier's response). We have to ask ourselves what the underlying problem still is. Physicalists can see the illustration of the basic principle at work and that the programmed mechanism does all the "magic." Mechanical intentionality is behavior involving goals. Robots can have goals. Physicalists can then, by extension, ask themselves what would happen if pancake-bot were more sophisticated and could in fact do a lot more things with its pancake models of reality. The more dynamic functions we give it, and the more we ask it to be able to do, the more well-rounded, efficient, and seamless the programming would have to become. Who is to say just how far that could some day go? Should we trust Reppert in his "it just doesn't seem like it" armchair philosopher expertise? Conceptually (since there is only a conceptual problem for Reppert), we see no reason that such sophistication couldn't in principle reach such drastic heights of complexity that pancake-bot itself might get a little confused and fall prey to a FoC (just like AfR advocates) about its own computational nature.

Perhaps such things can't go that far, but we certainly don't know that from anything Reppert says. Physicalists can afford to be patient and AfR advocates cannot in terms of the argument they would prefer to make today. The most conclusive way to test the hypothesis that the dualist's conceptual objection to physicalism is a FoC is to actually make such a composition and see what happens. If we lived in a world where all such pursuits were dead ends, Reppert might be able to get away with ignoring all the evidence that Carrier points to. As it is, if there is to be any "debate," physicalism has not even been allowed to begin to present its case even though it is well on its way and far surpasses what appear to be the conceptual brain farts of opposing views.

Vallicella moves the goal post on intentionality back to God:

I think it helps to distinguish between original and derivative intentionality. A map is about a chunk of terrain. Its intentionality or aboutness is derivative from original acts of intentionality whereby I assign certain marks on the map to certain geographical features. Thus I or we assign to contour lines that are close together on a topographical map the meaning: steep terrain. The map by itself cannot mean or intend or be about anything. The same goes for extremely complex systems like this Pentium IV sitting in front of me. It is not thinking in any serious sense at all.

What the naturalist cannot account for is original intentionality.

I'm assuming Vallicella is conceding that computer processing is at least thinking in some less than serious sense? Regardless, intentionality is goal-related processing and behavior and we have robots with limited goal-oriented behavior. A debate on "original intentionality" necessarily takes us into a creation and evolution debate. In terms of naturalism, that's where the ability of biological brains to have original intentions comes from. The construction of a robot brain is a possible physical pattern in this universe as demonstrated by the actual construction of said robots. [I pause here a moment to reassure my theistic readers that I'm aware humans built the robots. Continues...] The specified pattern of a mechanical goal-oriented brain is only more complicated than many other biological features theistic evolutionists do not seem to have a problem with embracing in terms of an evolutionary origin. Hence, if evolution can do all the other things theistic evolutionists believe it can do, there is no reason to single out the brain. And for those who do not accept macroevolution at all, they have to argue for creationism and no longer have an argument about intentionality.

A deep intuition that intentionality "just can't come from that" means very little. It is also a fallacy of composition (at a different level) to say that non-intentional evolutionary processes cannot in principle produce intentionality devices like brains. If one wants to say that wouldn't be "real" intentionality, then guess what. There's no such thing as "real intentionality" and never was. Semantics will not save the AfR.

A Big D'oh:

Reppert says:

In order for “correspondences” to be of significance, doesn’t it have to be a “correspondence” recognized by somebody’s conscious mind as being “about” the thing in question?

This is definitely level 2 again, where the goal post of "significance" has been moved from correspondence back to an argument from consciousness. In the way that the machines we currently have know things, that knowledge is significant in that smaller loop where virtual models are literally built (as opposed to the machines not being rigged to build virtual models). One has to have the right expectations and ask the right question at the level the example is currently at. "What is the significance of this sensing and processing?" Obviously the answer is, "It's the construction of a virtual model that can be used to inform further activities." It's not attempting to ask if the system is sophisticated enough to understand its own mechanism of virtual model construction. That would be a big "d'oh" for physicalists if we thought that explained consciousness, but that's just not the claim being addressed. It also doesn't mean that a further iteration of sophistication (that of modeling its own ability to model) is impossible just because only relatively rudimentary examples can be given at this point in time.

The Wonderful Thing About Knowers:

Reppert says:

The intentional state has to be believed to correspond. But how could we define belief if we didn’t have any idea what it was for a mental state to be about something? If I have to believe that brain state X is about object Y only if I believe it to correspond to Y, then how do we analyze my belief that there is a correspondence without throwing us into an infinite regress?
As I said before, it's logically impossible to inform a mind about all things mental without the mind (by definition of being what it is, and being able to do what we expect it to do) already having a decent idea of what these things already are. However, the dots of what you already know can be connected more clearly. One would be hard-pressed to think up a possible scenario where you could literally start with a blank slate audience. Perhaps the childhood developmental gradient into the basics of epistemology (like the idea that not everyone knows the same things) might be applicable. In physicalist terms, that hypothetical "discussion" is more like the actual process of building artificial minds so they'll know how to know things about knowing things (or a child's brain growing into that status). Self-reflective knowers that know that they know things can't be informed about knowledge itself as virgin knowers. They already know they know they know, and if you are going to explain anything, you have to start somewhere. Knowledge is systemic, meaning it only has relevance in the ultra-sophisticated mechanical setup it is a part of. Its apparent magical stand-alone nature is a convenient illusion for the sake of practical mental organizational principles.

Raise Semantic Shields!

Reppert says:

But surely one can “track” something without thinking “about” it. A heat-seeking missile tracks its object, but surely we don’t want to say that its activities are in any way intentional.
Surely? A heat-seeking missile has to have something analogous to our mental convention of "aboutness" for its systems to be able to distinguish a target from a non-target. What else does it mean for us to talk about something other than to be able to distinguish one thing from another? A heat source is one thing, and there are certainly more sophisticated targeting systems. I see no reason why this can't continue by extension all the way up the sophistication graph to our extremely versatile level of aboutness. Clearly this is where AfR advocates prevent the conversation from going anywhere from the ground floor up. I await a volunteer from the AfR camp to try to deflect a missile with semantics. My money is on the intentions of the program. I suppose I could hire a hit man with a bazooka as a control group if they really think having an auxiliary intentional component in the equation means they will be any less blown up in the end. The more sophisticated the target acquisition happens to be, the more perfect sense it will make for a bystander to ask what the intentions of the smart missile are as it navigates complex situations to get to its intended target. In the end, you probably won't know why you shouldn't use the word "about" or "intention" or whatever other sacred terminology AfR advocates trip up on. Words are words, and physicalists have just as much a right to define what genuine "aboutness" really is as any other worldview.
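The ground-floor sense of "aboutness" as discrimination can be sketched in a few lines. This is a cartoon, not a real guidance system; the threshold, signal format, and function names are all invented for illustration:

```python
# Minimal sketch of mechanical "aboutness" as discrimination: a seeker
# that distinguishes target signatures from non-targets and tracks only
# the former. The numbers are arbitrary, illustrative values.

TARGET_THRESHOLD = 100.0  # heat signature above this counts as "target"

def classify(heat_signature):
    """Distinguish a target from a non-target -- the entry-level case of
    a system's states being 'about' one thing rather than another."""
    return "target" if heat_signature > TARGET_THRESHOLD else "non-target"

def steer_toward(readings):
    """Track the hottest reading that classifies as a target."""
    targets = [r for r in readings if classify(r) == "target"]
    return max(targets) if targets else None  # no target: nothing to pursue

# A cool rock, a campfire, and a jet exhaust:
readings = [20.0, 90.0, 450.0]
locked_on = steer_toward(readings)
```

Stacking ever more sophisticated discriminations and behaviors on top of this kind of loop is exactly the "all the way up the sophistication graph" extension argued for above.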

Vallicella says:

A thermostat can be described as a sensor. But obviously it is not sentient strictu dictu. It cannot feel anything. Clinton can feel your pain, but no thermostat feels the heat or the cold. Both intentionality and qualia are beyond the reach of naturalistic explanations.

No one is saying something like a thermostat is sentient. But it is a basic entry in the mechanical "telling the difference" category that can in turn be built upon for more sophisticated qualia machines.

Reppert says:

In dealing with questions of mind you have to be awfully careful to make sure that the words people are using really do mean what you think they mean, or whether they have been subtly re-defined to make them tractable to a physicalistic account.
Vallicella agrees:

Reppert is right. It is easy to be misled by language.

Not every casual word and phrase lends itself to their perspective either. It should not be some kind of intellectual crime or conspiracy to understand common sayings in light of other facts and a more rigorous understanding of the way things work than most people are willing to apprehend from their own words. If you've grown up in a scientifically literate age, where science fiction has illustrated every conceivable twist on superficial reality as food for thought, you may find yourself with much less attachment to various words than others appear to have. As cliché as it is to suggest: "Think outside the box."

At Versus Along (Whoever Wins, We Snooze!):

Reppert says:

In short, I just don’t see it. C. S. Lewis wrote an essay in which he delineated the difference between “looking at” and “looking along.” When you look at something, you view it from a third-person, outside perspective. When you look along something, you view it from within. An attempt to come up with a physicalistic view of the mind invariably ends up looking “at” mental events, and always fails to capture what is going on when you look “along” those same events, as the thinking subject.
For the record, in principle any distinction, if it is a real distinction, can be captured with more sophisticated logic gates. If someone can articulate it, someone someday will be able to program it. Period. It won't matter what preposition it is, as though some are magically exempt. Otherwise, virtually by definition, you aren't talking about anything. If I'm wrong, please feel free to let physicalists know which prepositions are inexplicable and which are good to go naturalistically. Then computer programmers will have goal posts that can't be moved on them when it comes to that full proof of concept we might have someday.

Who is Conceptually Challenged?

Reppert says:

But perhaps we all are just suffering from a lack of imagination. If so, then Carrier’s reflections on the matter have done nothing to expand my imagination. The suggestion that intentional states could arise in a purely physicalistic universe strikes me as incoherent.
It is doubtful that every (or even many or most) AfR advocates have a literal lack of imagination in general. For instance, I wouldn't consider C. S. Lewis particularly lacking in imagination. Quite the contrary. Ordinary varieties of a-rational aversion and misdirected validation will probably do. Carrier was defining things in terms of what is being presented in the discussion. Who knows how conceptually challenged Reppert is or isn't in his everyday life (or why that matters). My current model of AfRite rejection of a physicalist explanation includes: one part background experience in general theism, one part fallacious incredulity (FoC), one part theistic bravado on behalf of a quintessentially mental supernatural worldview that invites this category of self-reflecting circular validation whether it is defensible or not, one part flaky retaliation to being persistently accused of being unreasonable for being religious, and one part natural aversion to the general degradation of an orderly and meaningful "middle world" existence. Everyone strikes a different balance of factors that contributes to their own particular credence function (as Reppert might say) on a given question. One can hardly blame them for coming from their own perspective from the get-go, or for being validated within its confines. It's not like people can't change their minds. The primary fallacy is all that really needs be addressed, and I think that's been done sufficiently.


In my opinion, Reppert needs to set aside the one philosophical problem ("at" vs "along") and answer how the evidence would be any different if physicalism were true. If in fact his primary underlying problem is a mere FoC (with lots of supporting fallacies) and there really is no conceptual problem with a-rational materials coming together in a special way to form rational thinking human beings such as ourselves, what exactly would be different? Would thinking rational thoughts "seem" different if we had the knock-off representational computer versions? How could Reppert possibly know better? And finally, why should physicalists fear a future religious apocalypse more than a robot apocalypse? :D

As long as Reppert and AfR advocates' primary crux of incredulity is indistinguishable from a FoC, and as long as computational scientists and neuroscientists are proceeding in their understanding and development of better theories of mind and A. I., it would be unfair indeed to credit the AfR with more "promise" when physicalism is the worldview continuously bringing home more and more "bacon." They can write all the conceptually and philosophically confused books they want until their views blossom into full-blown robo-bigotry in the future (your feelings aren't valid, Wall-E, because we built them, etc.). They won't be helping us sort out the moral difficulties that may arise once something like Project Blue Brain nears completion. That's hardly what can be called progress on many levels.