A friend of mine claimed to have the proof that God cannot be omnipotent. I said fine. Lay it on me.
He asked the question, "Can God create a stone he himself cannot lift?"
That was several years ago. I've thought about it. I think the question plays not on omnipotence, but rather our inability to physically comprehend infinity (i.e. picture a universe without end). I know you've had this question before. But what I'm interested in is the answers given in the past. At the time the question was posed, this friend of mine expounded upon several answers given by the Vatican, and if you could dig up a bit of history on this, I would be grateful.
The supposition that there is a stone so heavy that God cannot lift it implies a contradiction, since it implies both that God can and cannot make such a stone.
St. Thomas Aquinas gave the standard answer to this ancient conundrum: God's omnipotence does not imply that God can do what is logically impossible to do, because the "action" of doing what is logically impossible is really not an action at all, since it describes nothing, in just the way the phrase "four-sided triangle" describes nothing. Mortals are not omnipotent because they cannot do whatever can be done, such as, for instance, moving a star from one galaxy to another. God is omnipotent because he can do whatever can be done, even (supposedly) shifting a star, since doing that does not imply a logical contradiction. But that God cannot do what cannot be done, namely a logical contradiction, "does not detract from his omnipotence," as Aquinas put it.
I get a little frustrated at times because in philosophy there are so many words I don't understand. The worst part is that I lack discipline when it comes to studying. Sometimes I think that maybe philosophy is not for me. Can you give me some advice?
I have sympathy with you because I used to be a lousy student. Lazy and undisciplined, I needed a final essay deadline, or the imminent threat of exams, to get me off my backside. Even now, I have to psych myself up to read an article or a book. I don't find academic study a 'natural' thing to do.
Your impatience and frustration seem to indicate that you are trying to tackle too much, too quickly. Scale down the task. If your teacher gives you a book to read, read one chapter. If they give you a chapter, read a section. If they give you a section, read a page. And if your teacher gives you just one page to read? Well, you can read a page, can't you?
Buy a good philosophical encyclopaedia. Three that I recommend to my Pathways students are the Oxford Companion to Philosophy edited by Ted Honderich, the Cambridge Dictionary of Philosophy edited by Robert Audi, and the Concise Routledge Encyclopaedia of Philosophy, which is based on the eight-volume Routledge Encyclopaedia of Philosophy edited by Edward Craig. Take your pick.
The encyclopaedia will help you with philosophical terms, or names of philosophers that you have not encountered before. But don't make the mistake of thinking that all difficulties of understanding can be traced to unfamiliarity with the vocabulary of philosophy. It is a lot harder work reading a piece of philosophy than it is reading just about any other subject. That is why you should expect to encounter difficulties, and not bite off more than you can chew.
I don't know whether or not philosophy is for you. If you genuinely feel a need for philosophy, then philosophy is for you. There will be times when a piece of assigned reading, or an essay, defeats you. Expect that to happen. It happens to us all! Try again, or scale down the task, or put that topic on hold while you look for a more accessible point at which to grapple with the subject. Persist, and your persistence will pay off.
Why do they call a shoe, a shoe, and not a refrigerator?
There is a story of a woman who once asked an astronomer, "How did astronomers know that the planet Jupiter was called 'Jupiter'?" What is the answer to that question? It is, I think, that they didn't know. They just called Jupiter by the name "Jupiter."
Of course, there were historical reasons. For example, Jupiter was the chief of the Roman Gods, and Jupiter was the largest of the planets. (One of Mozart's symphonies was called "Jupiter" for a similar reason.)
What is important is to distinguish between language and what language is about, the world. Language is conventional. That is to say, it is a (tacit) agreement among the users of the language to call things by certain terms. We call a shoe by the term "shoe." But the French call shoes "souliers" (a masculine noun) or "chaussures" (a feminine noun). (And, if you are really interested, the French word for refrigerator is "réfrigérateur.") But there is nothing about shoes that calls for their being called "shoes," although there are causes, which are discovered by etymologists who trace the history of words. A very interesting subject.
We could have called a refrigerator a shoe because names are arbitrary, although a lot of our language is based upon or derived from Latin, and to this extent it is shaped historically.
A name is arbitrary because it is simply a symbol which acquires cultural currency. Most names stand for things and concepts. Names for things can change. For instance, we used to use the word "refrigerator" but now we tend to shorten it to "fridge", and there is no reason why this might not change completely to something such as "cooler". We often adopt American terms for objects and give up the English ones. Names for things can change partly because they refer to objects, and so stand for something with a determinate description. But when we think of concepts, which are abstract, such as red or good, it is difficult to imagine this sort of change. When we describe something as "good" in a non-moral sense, we might use the Americanism "ace", but we don't give up "good", and "ace" is already dropping out of usage. "Refrigerator" is a descriptive name, as is "cooler", so we can change the name by using a term with a like meaning; there is no like meaning for "red" or "good".
Alternatively, this could be explained by means of reference and determinacy. A name picks out objects of a particular sort so we can use more than one name to refer to an object because what the object is like provides a determinate definition. If I adopt the term "cooler" for a fridge, I can explain what I'm doing without using the word "fridge" by describing the object. It is explanatory to say that I'm now using the word "cooler" for the thing we use to keep our food cool. We don't have determinate definitions of concepts. If I use different terms for red or good, the only way to explain this is by saying that by "rue", for instance, I mean red. Theories of meaning aim to explain what we mean when we use a word. In the light of your question, it seems to me to be a good starting point to sort out types of word rather than focusing on the meaning of "meaning" or what it is to mean something by a whole proposition. This is what Aristotle was doing in the Categories.
What is a person? That is the bottom line of my question. For most people use the term "person" and yet cannot necessarily define what they mean.
In the same line of thought as John Macmurray (The Self as Agent, Persons in Relation), it appears that personhood must be defined in relation to others, and not simply autonomously. What then are the implications of such an approach? Who and what then are persons? Animals, people, God, non-living objects? (Try not to address the peripheral issue of what is non-living or has no life.) I have given away a few of my presuppositions, but the bottom line question is, "What is the definition of a person?"
Person is a complicated and ancient notion. It is a concept which definition kills, because it has an axis of meaning, as I shall endeavour to explain.
Our word person comes from the Latin persona. We use the word 'persona' in English in the sense of someone playing a part, or putting on an act. We distinguish the persona from the real person. In Latin persona is related to other concepts we have in English such as personal (personalis) and personality (personalitas), both of which refer to what we would ordinarily think of as the real person, rather than as an act they are putting on. Already in the Latin word from which we gain our word there is an ambiguity between the real person and the 'persona' we wear. The ambiguity about the meaning of person in English harks back to the ambiguity that was already there in Latin.
Of course we can see a person as a thing, as merely an object, but we tend not to. There seems to be more to a person than object behaviour. Today we talk about the dignity of a person and their fundamental human rights. To speak of a person like this is to recognize that a person is not just a thing. Heidegger summed it up like this: "Man (a person) is the being for whom being is an issue." The legacy of understanding which our language carries says that a person is different from an animal, even different from some people's zoological description of him or her as a "primate". We call ourselves "primate animals" because our being is an issue for us and we are trying to understand it. "Know Thyself", the Socratic dictum, shows that our being is an issue: although we are, we don't know what we are. Your question asks about the same thing, "What is a person?" The ambiguity and difficulty of knowing what a person is, is compounded by the task of being one.
This ambiguity and difficulty was first thought through by Greek-speaking Christian philosophers in the fourth and fifth centuries of our era, and we are still in the sway of that thought. The Latin persona was a translation of the Greek prosopon. The Greek word means 'face'. But to designate what a person is, these Greek Christian philosophers used the word hypostasis, which means both 'existence' and 'existent' depending upon the usage, a bit like 'man' in English may refer equivocally to a particular man and to mankind. The concept of hypostasis was synonymous in Greek with ousia, or in English, 'essence'. Our modern understanding of the concept 'person' still carries the influence of these Christian philosophers. A person is an essence, a universal, but also and at the same time absolutely particular. In other words, a person is different from every other, but also of the same nature. The modern notion of the dignity of each person goes right back to this definitive thinking in the fifth century, although the seeds are of course much more ancient. What a person is belongs to this universality of the self, rather than to the 'individualism' of the self, which is the other pole.
Matthew Del Nevo
Our concept of a person, or a human being, should exclude anything that looks like and seems to be a person but is, say, robotic. Our concept of a person or human being is that it is a conscious biological organism, ideally rational and a language speaker. We discount animals as persons since they are not rational and they are not language speakers. It is true that many persons may not be rational or language speakers, for one reason or another, so the idea of a person as a biological organism is paramount. In some cases, a person may not be conscious, if there is impairment to brain function, so consciousness also takes second place to the biological nature and origin of the organism. The origin of a being determines the type of organism an individual is. If a being comes to fruition through the fertilization of a human egg by human sperm, this is a person. If we adopt this view of origin we can reject proposals that a robot can ever be a human being just because it looks like a human being and behaves as such. This is an objective view.
The subjective view, the acquisition of the concept of oneself as "I", must be defined in relation to others. One account of why this is so is Wittgenstein's argument against the possibility of a private language. Language is rule-governed, and a person cannot be held to be following a rule alone, because he can be mistaken about the criteria for application. On this argument, if I am the only person in the world, I would not possess the concept of myself as a "person" or an "I". However, if I am the only person in a world with other objects, I will learn to distinguish myself as one object amongst others by perceptual means, and will naturally possess a subjective view and self-awareness which I don't have to refer to any concept such as "I". The Cartesian "I" is no longer taken to be related to a thought content or experience. I will still be a person even if I don't know it.
I have been doing some reading in scientific thought. I would greatly appreciate some direction and or thoughts on the following two points:
If a science such as physics tries to base its conclusions on the "truths" of the universe, even though scientists try to hold to the ideal that their conclusions are not a naive view of what is really true by not depending directly on their perceptions via the senses, are not all of their theories derived at some point from, and founded on, percepts delivered by the very senses which they do not trust?
Since science operates empirically, on induction, is it not resting on a leap of faith? Even a million experiments is too small a sample to support a conclusion within a reasonable confidence limit, since the possible experiments that could be done far exceed those that ever will be done... so much so that the number done approaches zero as a fraction of those that could be done, as the latter approaches infinity.
1. Your first question reminded me of Bertrand Russell. A quick internet search unearthed the following famous, or infamous quote:
Physics assures us that the occurrences which we call "perceiving" objects, are not likely to resemble the objects except, at best, in certain very abstract ways. We all start from "naive realism", i.e., the doctrine that things are what they seem. We think that the grass is green, that stones are hard, that the snow is cold. But physics assures us that the greenness of grass, the hardness of stones, and the coldness of snow, are not the greenness, hardness, and coldness that we know in our own experience, but something very different. The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of a stone upon himself. Thus science seems to be at war with itself: when it most means to be objective, it finds itself plunged into subjectivity against its will...
And now the famous bit:
...Naive realism leads to physics, and physics, if true, shows that naive realism is false. Therefore naive realism, if true, is false; therefore it is false.
Bertrand Russell, An Inquiry into Meaning and Truth (Unwin Paperbacks, London, 1950), p. 15.
Do we have to accept that physics, if true, shows that naive realism is false? And if we do, does it matter? I used to think that physics does show that naive realism is false, but that it doesn't matter. That's what Russell seems to be saying. Physics can still be true, so to hell with our common sense beliefs about the world of our sense perception.
I now think that Russell is far too quick to concede the sceptical argument against the common sense or naive view of perception. Just because a chain of physical causes and effects is involved in human perception, it doesn't follow that when I seem to perceive a chair, what I really 'perceive' is Russellian sense data, or the product of processes going on in my own brain.
However, so far as your question is concerned, what I think isn't important. Either way, physics is still true.
2. Your worry about induction seems at first sight very plausible. Once again I am reminded of Russell. (I won't quote him this time.) Picture this. Each day, as the sun goes down, the farmyard chicken says, 'I wasn't slaughtered today.' So, each day, the inductive evidence in favour of the proposition, 'I won't be slaughtered tomorrow' increases. Are we really in a better position than Russell's chicken?
The chicken's problem is that it lacks the bigger picture. That is always a worry. You thought all swans were white, but you have never visited New Zealand. There is always a doubt whether or not we have selected a representative sample. Even if we put that worry aside, however, there still seems to be a huge discrepancy between the small number of cases examined and the number of cases that have not been examined: the examined cases diminish to an infinitesimal fraction as we widen the angle of view to take in the whole universe.
The worry is groundless. To see this, imagine the following case. In the cupboard there is a large barrel of boiled sweets. The barrel is too big to move, and there is no light in the cupboard. So you fish around, right to the bottom, grab several handfuls of sweets, and examine them in the light of day. Every single one of the sweets is red. Provided the sweets are picked at random, so that you have a representative sample, that is excellent evidence that the large majority of sweets in the barrel are red, even if your sample is only a small fraction of the whole. This is what common sense tells us, and what the mathematics of probability theory confirms.
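The common-sense point can be illustrated with a toy simulation. The barrel size, the 90/10 colour split, and the sample size of 200 are invented numbers, chosen purely for the sake of the example:

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

# A hypothetical barrel of 100,000 boiled sweets: 90% red, 10% blue.
barrel = ["red"] * 90_000 + ["blue"] * 10_000

# Fish out a random sample of 200 sweets -- a fifth of one percent of the barrel.
sample = random.sample(barrel, 200)
proportion_red = sample.count("red") / len(sample)

print(f"proportion of red sweets in the sample: {proportion_red:.2f}")
# A representative random sample lands close to the true proportion of 0.90,
# even though the vast majority of sweets were never examined.
```

What matters for the estimate is the absolute size of the sample, not its size relative to the barrel, which is exactly why the "infinitesimal fraction" worry is groundless.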
Of course, you can't use this method to prove that every single sweet in the barrel is red. You can't prove that there isn't one blue sweet down there somewhere. The point to make here is that the example of the sweet barrel differs in one crucial respect from gathering evidence for scientific theories: the generalizations we seek to gather inductive evidence for in science are lawlike. If there is a contrary instance somewhere, then we shall look for, and can expect to find, a relevant difference that explains it.
A.V. Ravishankar asked:
Our daily linguistic usage and our communication is a form of common sense reasoning. If so, how can we formalize common sense? How is common sense reasoning used in explaining the counterfactuals used in daily linguistic discourse?
You cannot formalize common sense. We know what makes sense to us but logical formalizations sometimes come up with nonsense. Logical validity is not the same as validity in ordinary language. Even connectives such as "and" and "or" are not translatable between logic and English such that they always make sense. Mark Sainsbury (you should read his book Logical Form) has used the following example of a logically valid argument using conditionals which is not valid in ordinary language:
(1) If Smith dies before the election, Jones will win.
(2) If Jones wins, Smith will retire from public life after the election.
(3) So, if Smith dies before the election, he will retire from public life after the election.
We can think of grounds for premises (1) and (2), but the conclusion (3) is absurd. There are theories which aim to provide an account of what needs to be added to logical formulations so that they reflect ordinary language usage, but these fail to account for the above example. H.P. Grice argues that a conditional should have assertibility. The conditional "If ice is denser than water, it floats in water" is true as a logical formulation, because the consequent is true, but it is not a reflection of common sense and it is not assertible. Grice's suggestion works for the ice example, but the Smith and Jones example can't be explained by the non-assertibility of the conclusion, because it is a problem of the connectives being non-translatable.
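The formal half of the contrast can be checked mechanically. Reading "if" as the material conditional, which is precisely the reading that diverges from ordinary usage, a brute-force search over all truth assignments confirms that the Smith and Jones argument form, from (p → q) and (q → r) infer (p → r), has no countermodel:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: false only when the antecedent is true
    # and the consequent is false.
    return (not a) or b

# p: Smith dies before the election
# q: Jones wins
# r: Smith retires from public life after the election
countermodels = [
    (p, q, r)
    for p, q, r in product([False, True], repeat=3)
    if implies(p, q) and implies(q, r) and not implies(p, r)
]

print(countermodels)  # [] -- no assignment makes both premises true and the conclusion false
```

So the argument is classically valid by exhaustive check; the absurdity of (3) is therefore a defect in the translation of "if", not in the logic.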
In the example "If Oswald didn't kill Kennedy, then someone else did" we have the evidence that Kennedy was killed, and so this is highly assertable. For an ordinary conditional we need reason to believe in the antecedent.
Counterfactuals are further from common sense. "If Oswald hadn't killed Kennedy, someone else would have" requires additional assumptions if it is to be true. Sometimes there are no assumptions which we can make, as in the case of Michael Dummett's example: suppose Jones is now dead and never faced danger in his life. We have no evidence or grounds to suppose anything about the truth of the counterfactual "If Jones had faced danger he would have acted bravely". We have no knowledge of the counterfactual person Jones and what he might have done. On Dummett's view we know what would make this true, i.e. that Jones was in fact brave. However, it is possible that he was not. The talk here is of possibilities rather than good reasons for making the statement, or assertibility as Grice would have it. There are two possible worlds which determine the truth-value of this counterfactual, and a possible world is an imagined state of affairs, at least on some accounts. On any account it has nothing to do with common sense. Read David Lewis.
I'm very interested in philosophy and I want to learn how to comprehend theory in the same way as other intellectuals. I'm only 17 and I've purchased Plato's Republic and it seems very interesting to me. Is there anything you can tell me about becoming a philosopher in a figurative sense? I often observe my peers at school and I write things down about their actions and my predictions as to why they act in the ways they do. I find it very interesting, as well as challenging, to find out why they are behaving this way. Is there more there to teach me advanced ways to go about searching? Where can I find this information?
First, I think you may be confusing philosophy with psychology. It is psychology, not philosophy, that is the science investigating why people behave as they do.
So I think that the first step you have to take to become a philosopher in any sense is to decide what philosophy is, and not confuse it with something else; since if you do, you may find that you want to become something else and not a philosopher at all.
This site has information on what philosophers do, and there are many other sites on the Internet that provide this too. Go, for instance, to Epistemelinks.com. Or go to Askme.com, register, and ask a question.
Why do you want to become a philosopher in the figurative sense, anyway? Perhaps you mean that although you don't want to become a professional philosopher, you want to be an amateur philosopher: think about philosophy without getting paid to think about philosophy.
But, as I have already said, if you don't know what philosophy is, and what philosophers do, you are going to find it difficult to decide whether you want to be a philosopher at all, never mind the "sense."
How have Teleological and Deontological ethics influenced major political philosophies of the 18th, 19th and 20th centuries? What countries represent such ethics?
Can you explain to me the origins of Social Contract Philosophy?
1. One interesting case of deontological ethics influencing political developments is the idea of natural rights: that everyone has certain rights that are inviolable, such as the rights to life, liberty and property. This idea, advocated by John Locke, was adopted in the American Constitution (1787) and the Bill of Rights. (Many historians and political analysts would say that such influence started with the Enlightenment and the French Revolution, but this is controversial.)
Probably the most commonly cited example of consequentialist ethics influencing politics is Utilitarianism, which was embraced by conflicting political opponents, including both liberal reformist positions and the conservative "laissez faire" economic ideology (although utilitarians themselves were divided on the question of government intervention in the free-market economy of Victorian England).
The 20th century can be seen as a mixture of both deontological and teleological influences. This probably reflects the distinction J.S. Mill makes between private and public life. On the one hand there is the need for governments to protect the rights of the individual, and on the other hand there is the need to provide the good for society as a whole.
Combining these two aims into a successful and functioning political philosophy is perhaps the aim of the 21st century.
2. The idea of the social contract can be traced to Plato (see his dialogue Crito and the Republic Book 2). Plato discusses the contract as a defence against harm and suffering. While Plato does not accept this idea, a similar view was developed by Hobbes in his Leviathan. The idea has also been discussed by Locke and Rousseau.
The thought common to all these philosophers is that humans pre-socially are in a State of Nature, and because of the conditions of this state of nature come together to form a society. For Hobbes the state of nature was one of constant fear of attack and death, a war of all against all. This fear of death leads to the formation of a state which would ensure an individual's safety.
The social contract has survived into the present day, and takes two basic forms. One, following Hobbes, stresses the rough equality of the physical powers of individuals and the advantages of cooperating in a society in order to preserve each other's interests. The other is a Kantian idea based on the recognition of equal moral worth and status, in which each person's welfare is a matter of impersonal concern. This account has found its most famous and detailed defence in Rawls's A Theory Of Justice.
Dept of Philosophy
University of Sheffield
Is the notion of a social contract a useful device for the solution of problems in political philosophy?
Typically issues that political philosophy deals with are the questions of what makes a society just, of how we can reconcile liberty and equality and why we should be obliged to obey governments. The notion of a social contract seems to offer a viable answer to these questions: a just society is one where the individuals come together and agree to a form of government in order to ensure security for themselves. Some degree of liberty may have to be traded for equality but this is compensated for within the terms of the contract.
Now there are various forms the social contract can take, and various problems associated with them. However, there are two particularly interesting and important objections against social contract theories, which I think show that social contracts do not play a very useful role in dealing with political problems.
The idea behind the first argument is that social contract theory presents a picture of individuals as basically selfish and egocentric. Here the social contract is an opportunity to bargain for the best situation for oneself, to promote one's interests and security. (Even in Rawls's system, which is based on the Kantian idea of a person as an end in themselves, where an individual has intrinsic moral worth, the principles of justice chosen in the Original Position, Rawls's version of the contract, reflect the fact that in the original position I do not know which person I will be in the world or what my social situation will be like. In choosing, I had therefore better choose principles that will provide the best possible situation for everyone.) The first objection to the social contract, then, is that I think this selfish attitude is implausible. I do not think that it is an accurate picture of human motivation.
Certainly we have personal motives, requirements and interests, but at the same time as we think about these, we also recognise the motives and requirements of others. These motives of others form the basis of our moral lives and do not need to be imposed on us by society or government. (Or, to be more specific, the recognition of others' claims does not need to be imposed on us.) One may think that, even so, the solution needs to be imposed on us, and that this is where social contracts come in. I don't think this is right. The solution does not need to be imposed on us: political institutions do not generate the solutions to the impasse between the personal and the impersonal, but are the results of proposed solutions.
Even if a view of the social contract could be formulated so that this objection was overcome, there is a second objection: any social contract would presuppose, and therefore could not generate, an idea of justice. And if a social contract could not generate or justify principles of justice, then it could not help in solving political problems. For example, Rawls admits that if individuals in the original position are disposed to gamble or take risks, then they may propose principles of justice other than those Rawls suggests would be chosen. They may choose Utilitarian rather than maximin principles, for example. (See A Theory of Justice, Sec. 20.)
But if different theories offer different principles then we would have to decide prior to entering into the contract situation which theory we accept. The social contract would then be either trivial or redundant.
Hume may have had a similar point in mind when he criticised the social contract tradition. Social contract theorists say we need to obey governments because we have promised to, but Hume asks: why should we keep our promises? (See "Of the Original Contract".)
The social contract theorist cannot give an answer without giving a prior justification for keeping our promises, other than to say that we have promised to do so.
It seems then that the social contract requires a system of justice and morality before it can be of any use; but then what use would we have for social contracts if we already knew the basic principles of justice?
Dept of Philosophy
University of Sheffield
Since there was no historical event of establishing a contract, the question comes down to whether the notion is a useful model for understanding relations between a society and its members. It is like trying to understand the eye on the model of a camera, or the human brain on the model of a computer. A model is an analogy, and analogies come in two kinds: illustrative and argumentative. An illustrative analogy attempts to make the unfamiliar understandable in terms of the familiar (camera-eye, for example). It is, if a good analogy, supposed to be an illuminating teaching device. An argumentative analogy is an attempt to argue from the fact that two things are alike in certain ways to the conclusion that they are likely to be alike in further ways. It is a predictive device. The social contract is used in the first way, not usually in the second. How illuminating is it? That depends on how close the analogy is.
It has also been believed that the notion of the social contract is a kind of explanation or justification of the relations between the citizen and the society. In this respect, as David Hume argued, it seems to be a failure. Hume pointed out that contracts already assume a society, and therefore cannot explain a society. A contract is a kind of promise, and a promise presupposes obligation, since it is itself an obligation; thus it cannot be the justification of obligations.
It is interesting that despite Hume's criticisms, "contractarianism" ("contractism" is my choice) has attained a great deal of currency recently, especially through the writings of John Rawls, although I have never seen an adequate reply to Hume.
Kant is said to be a strong opponent of the ethical relativist position. But his Categorical Imperative seems to me to be pretty supportive of ethical relativism, and here's why. I think that reasonable/rational people can disagree on "hard case" issues . These same people would also be willing to make their decision into a universal law. It seems to me that the Categorical Imperative, therefore, is really a restating of the relativist position, and could not be used to, say, settle an argument about medical ethics. Am I missing some subtle philosophical point here, or is the Categorical Imperative only universal from the point of view of my being willing to impose my ideas on everybody, and subjective from the various points of view held by various people?
Perhaps another way to state my question is this: does the Categorical Imperative aim only to help individuals make a decision that is right for them, or does it aim to give a formula by which all reasonable people would come to the same conclusion regarding some of the "hard cases" we hear so much about in philosophy? Please help, my textbook has me going around in circles on this one! Cheers!
You have done an excellent job of laying out the problem facing Kant's Categorical Imperative. I have heard the criticism voiced that any action whatsoever can be interpreted in such a way as to satisfy the Categorical Imperative, e.g. 'Only fifty-year-old philosophers who live in Woodseats, Sheffield and wear bottle green V-neck jumpers are permitted to rob banks.' I can quite happily will that rule as a universal law, it is claimed, secure in the knowledge that I am, in fact, the only individual who falls under the description. Kant would have no difficulty brushing aside such a specious objection.
When it comes to genuine 'hard cases', things are quite different. For example, the pro-abortion and the anti-abortion campaigners would each like to see their view of abortion made a law for all.
It is clear from this example why it won't do to regard the Categorical Imperative as a way of making a decision that is 'right for you'. In the eyes of the anti-abortionist, abortion is equivalent to murder. To say, 'I would never seek an abortion, but I do not object if other women do' is like saying, 'I would never commit a murder, but I do not object if others do.'
There are two ways you can go. Richard Hare, author of The Language of Morals (1952), Freedom and Reason (1963), and, more recently, Moral Thinking (1981), has argued that a necessary defining characteristic of a moral judgement or 'prescription' is its universalizability. However, as we have seen in the case of abortion, many of our moral beliefs that pass the universalizability test still fail to meet the requirements of a universal moral principle. Hare calls the beliefs that fail fanatical. The pro- and anti-abortionist are 'fanatics' in this technical sense because each wishes to impose their view on everyone, regardless of the views others hold. Can any moral principle be non-fanatical? Hare thinks so. The principle, 'Choose the action which leads to the maximum satisfaction of individual preferences', is the only principle which would be acceptable to all those individuals who were not fanatical.
Notoriously, Hare's advocacy of preference utilitarianism leads him to embrace the conclusion that in a society of Nazis sufficiently 'heroic' in their hatred of Jews, the former might under certain circumstances be morally justified in exterminating the latter ('Ethical Theory and Utilitarianism', in Contemporary British Philosophy, H.D. Lewis (Ed.), Unwin 1976, cf. pp. 1212). Hare's defence of this seemingly outrageous claim is that such a situation would be extremely unlikely to arise in the real world. Likely or not, it goes without saying that Kant would have regarded such a notion with the contempt that it deserves.
In his Groundwork for the Metaphysics of Morals, Kant takes an alternative route. His successive formulations of the Categorical Imperative reveal an increasingly teleological element. Thus 'Act only on that maxim that you would will to be a universal law' becomes, 'Act in such a way as to treat humanity, whether in others or in your own person, as an end in itself and not merely a means', which in turn becomes, 'Act as a law-making member of the Kingdom of Ends'. (The Kingdom of Ends is an ideal community of rational beings who are ends-in-themselves for the very reason that they are, each and every one, the authors of the moral law.) Kant's strategy is the same as Hare's: to develop the notion of universalizability to the point where it would no longer be capable of sanctioning rival moral principles.
No-one can fail to be impressed by the nobility of Kant's vision. Making moral law, making a society in which we can all exist in harmony with one another as moral law makers, is our ultimate goal in life. It is a vision that blinds by its very lucidity. If we cannot all agree about how to live, that can only be because of a failure of rationality. For we ourselves are the reason why reason exists in the first place! Here Aristotle's idea of the Good Life as the life fit for rational beings to live is brought to its logical conclusion.
You might look at a later work, The Metaphysical Principles of Virtue where Kant tries much harder to give convincing derivations of specific moral rules from his Categorical Imperative, by comparison with the relatively perfunctory examples given in the Groundwork. I also think you should look at criticisms of Kant's strategy from a Hegelian perspective, for example, the very readable chapter on 'Duty for Duty's Sake' in F.H. Bradley's brilliant Ethical Studies (2nd Edn 1927).
When we are engaged in moral practical reasoning we might be willing to make our decision into a universal law, but the rational principle is that we should all be able to will the maxim such that it has the binding force of law. Based in rationality, as it is, the Categorical Imperative rules against relativism. Habermas holds that the Categorical Imperative is a principle of justification which can be used to discriminate morally valid from morally invalid principles. It constitutes a norm against which to test whether our principles are relativistic or not, asking us to reflect upon whether our moral principle is rooted in cultural facts about ourselves.
As to moral differences and disagreements, when these occur they will have non-rational grounds. We might take euthanasia as a medical ethical example. Euthanasia may be regarded as moral in one country and not in another. In the former case the ground would be to minimise suffering, in the latter that any form of killing is wrong. You cannot universalise a principle that we should kill someone to minimise suffering, since this is not a decision that we would all take as having the binding force of law, because it is based on differing inclinations between persons. If a principle makes reference to anything subjective, such as your ideas or point of view, it cannot become a universal principle.
It is true that the Categorical Imperative is not very helpful. It does seem as though there is not much that we can universalise as a law. Normally in practical reasoning we have to take account of the circumstances, and we are also guided by moral inclinations, such as feelings of consideration for others. Kant argued that to be moral is to be guided by duty rather than inclination, and he made no allowance for circumstances. He held that you should never kill and never lie. He once asserted that even when a man who wants to kill your friend asks you where your friend is, you shouldn't lie about this!
However, Kant's ethics is essentially a theoretical account of morality, and so necessarily abstracts from everyday practical issues. General principles cannot contain all the rules for application or provide answers to hard cases. Take the "hard case" of whether to kill one to save twenty. You can't universalise a law that we should always kill one to save twenty (put aside, for the moment, that according to the Categorical Imperative you shouldn't kill at all) because of possible circumstances such as the one being a decent person and the twenty being evil. If you were to apply the Categorical Imperative to this particular case in the sense of using it as a guide to behaviour or reason for action, you come up with the supposedly moral imperative that you "should" kill a person. So the Categorical Imperative should rather be used to sort the moral from the non-moral principles, in terms of its ability to pick out principles which are based on cultural prejudice, as mentioned above, and also as determining whether our principles are based upon emotion, such as in the medical ethics case. It should be understood as a higher ethical principle rather than a principle of practical reason.
The one or twenty case is used to test our moral inclinations. Kant doesn't deny that we have inclinations. We might, as Bernard Williams suggests, simply be too "squeamish" to kill one. Testing our moral inclinations might lead to some moral insights into our nature, but cannot produce a theory of justification for moral principles as Kant attempts to do.
Kant's ethical theory reflects our idea that as rational beings we have duties to others and can act upon those duties; if we can shape our inclinations such that they don't conflict with duty, there is the possibility of a truly moral action. It also reflects the rigidity of a moral attitude: that there are some things we must not do because they are simply not moral, like hurting and killing others. The Categorical Imperative encapsulates our idea that a good man abides by certain rules of justice. Morality is not about imposing your ideas on others.
The situation ethics of Fletcher is most widely understood as a Christian ethic, based fundamentally upon the principle of 'agape', or unconditional love for the neighbour. At the same time, situation ethics is contextual, and the morally permissible action must be understood in terms of the situation itself. How valid is it to criticize situation ethics as a conditional theory which rests on an unconditional principle? Are these theoretical characteristics not incompatible and contradictory?
Fletcher's basic principle is that there is nothing which is universally prohibited; there are no rules to tell us what to do and what not to do. However, there is something (for Fletcher one thing and one thing only) that is intrinsically valuable and good that can prescribe action. This is agape. So long as we act out of love we are acting morally. But it is a separate question what it is to act from the principle of agape. Perhaps situation ethics can be summed up in a quote: "There is only one ultimate and invariable duty and its formula is, 'Thou shalt love thy neighbour as thyself'. How to do this is another question, but this is the whole of the moral duty" (V. Temple).
This kind of structure is not uncommon in ethical theory. For example, Utilitarianism has a concept of an unconditional good, namely happiness. For the Utilitarian, any action that is considered moral must be one that promotes the greatest degree of happiness (for the greatest number). But what actually constitutes promoting happiness, how this is to be achieved, is a different issue, and will be answered differently on different occasions. Similarly for the principle of agape. While we must always act from and in accordance with the motive of unconditional love, what concrete actions we take will be determined by the situation (and not just physical aspects of the situation such as time and place, but also psychological aspects, i.e. what beliefs we have and what abilities we have). In this respect situation ethics is no more controversial than Utilitarianism (though this does not let situation ethics off lightly, for Utilitarianism is very controversial!)
The point is that the two theoretical characteristics are not contradictory; they may even be complementary. To see this, try to imagine either of them on its own and see if it is successful in generating moral behaviour:
First, the fundamental principle to love your neighbour or to promote the greatest happiness. On its own this is empty; it tells us nothing about what to do. Second, the situation itself. Suppose I am in a burning building which happens to contain my disabled father and a doctor with a cure for a killer disease. What am I going to do? Nothing in the situation will tell me; perhaps even no rule-based ethic could resolve the issue, such as the rule 'Never kill anyone'. Whatever I do, someone will die. But now if we put the two together we get an answer. The Utilitarian would say, save the one which will lead to the most happiness. Fletcher would say, save the one which would be in accordance with acting from love. I can't say which one because I am not in that situation. Which is the whole point of situation ethics. Fletcher does not say that the structure of situation ethics is like this: "if you are in a burning building, then this action X is the one that you should perform, because X is the one that conforms with the principle of agape".
We might find situation ethics unsatisfactory as an account of what it is to be moral, but that is a defect of Fletcher's arguments and assumptions in his account of what agape is, and not, I think, an inconsistency in the general structure of this type of ethical theory.
Though of course it is possible to reject this structural feature. Subjectivist and relativist ethics reject this framework in favour of some single permissive principle such as, "Whatever the situation, do whatever is necessary to achieve what you want". Other non-consequentialist, non-subjectivist ethical theories reject the above structure in favour of a rule, or set of rules, intended to guide actions regardless of the situation. Which one we should accept is a separate question.
Dept of Philosophy
University of Sheffield
What is (if there is one) a complete and accurate definition of Ethical Egoism? James Rachels and some philosophical dictionaries define it using the terminology of the "promotion" of one's self interest whereas Ayn Rand claims that it relies on the "achievement" of one's self interest. This distinction is very important when evaluating the arguments because one places emphasis on the process and the other on the conclusion or result.
There is, indeed, a big difference between the two formulations of ethical egoism.
It seems to me that Rachels is right and Rand is (as usual) wrong and confused. First, I don't understand what Rand means. Does she mean that unless you get what you want you are not an ethical egoist, or does she mean that if you get what you want you are an ethical egoist, or both? That is, is achievement of one's self-interest a necessary condition, or a sufficient condition, or both, of being an ethical egoist?
Let's consider the first, that it is a necessary condition. That means if you strive to get something, but happen to fail through bad luck (you have an accident), then you are not an ethical egoist. That's peculiar, don't you think? It means that something that is an accident could prevent you from being an ethical egoist. Let's go to the second formulation: if you get what you want you are an ethical egoist. Again, the same objection: suppose you get what you want purely by accident. A rich uncle dies and leaves you a great deal of money, and you want to be rich. That alone makes you an ethical egoist. I don't think so.
What Rachels would say is that it is the motive, not the achievement, which makes you an ethical egoist. If you think that the promotion of only your self-interest is a good thing, then whether you succeed or fail to satisfy your self-interest — which, as we saw, may be only a matter of luck — you are still an ethical egoist. So, Rachels is right, and Rand, as usual, wrong.