REveritt wrote: Bill, we seem to be talking past each other in this thread. I have not succeeded in explaining my ethical stance to you, and perhaps I have failed to understand yours. You have given me something to think about, but I am going to leave this discussion as-is for now.

Ray, I would encourage you not to abandon our discussion simply because we have not yet agreed. This should be an opportunity for both of us to learn something at a deeper level. Indeed, perhaps I have not understood you. Usually, I would maintain, such misunderstanding comes about by virtue of the same words being used in different ways by the two people.

So what I think I am hearing is that you do not yet have a UEP, other than the principle that you should have no UEP for fear that you may change your mind. And if you need constantly to try to improve your ethical stance (something I, as a Humanian, would strongly agree with), but you have no way of legitimating your ethical stances (because you are still waiting to make up your mind), then you could just as easily constantly try to improve your beliefs about what you should do to become a better and better terrorist.
There is a great tendency for people to discontinue dialogue simply because they do not agree, rather than to continue the dialogue with a more precise exploration of where the basic disagreement occurs. Of course, sometimes people do not wish to have a particular belief questioned because of the comfort or joy it provides. I doubt that this is a factor here.
I am indeed still trying to clarify in my mind the basic philosophical proposition that we have a different opinion about. It seems to me that it has something to do with legitimization of propositions or beliefs.
By legitimization I mean the reason I believe something and the reason that I believe you should have for believing it (the same reason). If the proposition meets the criterion for legitimization, then presumably we have good reason for believing it. If we do not agree on the criterion for legitimization, then we do not have grounds for agreeing, even if we still happen to believe the same thing. (For instance, you and I may believe the same thing but for different reasons.) So for us really to agree completely, we would have to believe for the same reason(s), and therefore be using the same criterion for legitimization.
So the question is how one would legitimate an ethical proposition.
(If one has no criterion for legitimization, then the acceptance of the proposition would be "arbitrary". There would indeed be reasons, emotional, social, psychological, etc., why one was accepting the proposition, but the reasons would not be ones that we would agree would be acceptable to base any beliefs upon, in that they could also result in what we would agree would be unacceptable beliefs.)
So the method for legitimization of ethical propositions that I am using is one that I think almost everyone who does any deep thinking in this area uses also. It is the demonstration that the ethical proposition is deducible by means of a syllogism. (We have come to accept the rules of logic as an important set of criteria for legitimization, and doing so is usually considered part of the definition of being "rational".)
So if my proposed ethical proposition is A, then it would be legitimated by the following demonstration (with of course the agreement that the first two propositions have already been, or readily can be, accepted):
B is true.
B implies A.
Therefore A is true.
B is a higher level ethical proposition. "B implies A" is usually an existential proposition about the way the world is, was, or will be, or it may simply be a definition (A is simply an example of B).
I should not kill that man.
If I pull this trigger, I will kill that man. (Existential proposition)
Therefore, I should not pull this trigger.
I should not kill humans.
That man is a human. (A statement true simply by definition)
Therefore I should not kill that man.
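Both syllogisms above are instances of the same logical schema, modus ponens: from B, and from "B implies A", conclude A. As a sketch only (the names B and A are placeholder propositions I have introduced, not part of the original argument), the schema can be checked formally in Lean:

```lean
-- Modus ponens, the schema behind both syllogisms above:
-- from B, and "B implies A", conclude A.
-- B and A are placeholders (e.g. B := "I should not kill humans",
-- A := "I should not kill that man").
example (B A : Prop) (hB : B) (hBA : B → A) : A :=
  hBA hB
```

The point of the formalization is only that the *deductive step* is uncontroversial; what remains open, and what the rest of this post is about, is how the highest-level premise B itself gets legitimated.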
But it appears to me that these higher and higher level ethical propositions can only go so high. If there were some sort of infinite regress, then there could be no highest level ethical proposition (or principle). Also, I think one simply runs out of candidate higher level principles; I certainly do.
So the basic options that one has are:
- I will accept no ethical principle as the highest level ethical principle or ultimate ethical principle (UEP).
- I will accept a UEP, and it is X.
- Bill accepts a UEP, and it is the REUEP, with the recognition of the possibility of later having to change his mind.
- Ray is waiting to accept a UEP, so as to avoid the possibility of later having to change his mind.
So Ray, am I analyzing this correctly? It's not an easy area to think about, but I think it is important.
Please don't give up on this discussion. It certainly is helping me. I hope that you will continue to get something out of it, and I value your ideas.