(1) Penny asked:
This is a question about justice, the law and the duty of bystanders who witness a crime.
A philosopher friend criticised my son as a 'snitch' for going to court to testify in a case as a witness (whereas I thought he was being public spirited, and argued that justice through the courts can only be achieved when people are prepared to testify even in the face of intimidation).
My son and three teenage friends were the only other customers in a family-run Pakistani restaurant which a group of aggressive white men trashed when they were unhappy with the service. They also attacked and injured one of the waiters, a clever sixth former in the same school my son and his friends attended, leaving him with some brain damage.
After much discussion of this and other hypothetical examples, my philosopher friend's reasoning seemed to be that giving evidence against people who have done nothing to you is not your business. If that evidence is given to authorities with coercive authority over people, it constitutes an act of aggression against others. He argued that it goes against Kant's first formulation; and challenged me to devise an appropriate maxim that would always hold true and I couldn't. As he wrote to me about it, 'If a principle cannot be universalised without contradiction it is not true and cannot be true. It may be an emotionally attractive principle and make you feel better, but it still isn't true.'
He agreed that I could report a robbery (or other crime) in progress to the police to allow them to do their duty and then go about my business, or I could intervene directly in the situation myself. But he claimed that I could not justify giving evidence in court after the fact.
I am interested in philosophy but am very poor at following through to logical conclusions. I asked if his was a very hardline Kantian position, as I couldn't imagine any of the usual secular humanist Kantian philosophers whose articles I read in the Guardian or wherever taking the same line, but he claimed that was the logical application of the CI in this case and there was no getting round it.
Is he right?
No, your friend is not right. The claim is that witnesses to a crime not only do not have the moral obligation to testify in court, but indeed are morally obliged not to testify. As justification for this claim your friend offers the first formulation of Kant's Categorical Imperative:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
Immanuel Kant, Groundwork of the Metaphysic of Morals (Quoted from the Wikipedia article on Kant's Categorical Imperative)
So we have two propositions to consider: first, whether witnesses to a crime are under a moral obligation not to offer themselves up voluntarily in order to testify in court; second, whether this claim follows from Kant's Categorical Imperative, or, more specifically, from the first formulation of the Categorical Imperative.
Let's look at the first claim. One of the basic regulative principles governing the way arguments in moral philosophy are conducted concerns the way we test proposed moral theories or philosophical claims about ethics against our intuitions, i.e. our ethical beliefs prior to conducting a philosophical examination. The American philosopher John Rawls, author of A Theory of Justice (1971), coined a nice term for this, which has become part of the contemporary philosophical vocabulary: he calls it reflective equilibrium.
When you make a claim, on the basis of a theory, which goes against unreflective moral intuitions, there are two possible outcomes: either one rejects the intuitions, or one rejects the theory. No moral theory is sacrosanct in this regard.
If witnesses to a crime never have the moral obligation or even the right to testify in court that would strike a blow at the very basis of our system of justice. The outcome would be intolerable in a civilized society. You know this. That is why the response from your 'philosopher' friend has left you so perplexed.
Now, it could well be that your friend has seized on this example as an argument against Kant's Categorical Imperative. This is familiar territory for moral philosophers. Even if one does not accept Kant's Categorical Imperative, one would be disinclined to accept the conclusion that Kant was just stupid, and didn't see an obvious negative consequence of his view. (Here, I am invoking another regulative principle, the Principle of Charity.) In other words, even philosophers who are not Kantians, have an interest in showing how Kant might have dealt with this challenge to his theory.
Suppose you were to say, 'Anyone who finds themselves in the circumstances I have described [you then go on to describe the circumstances in detail] is under a moral obligation to testify.' This looks like a cheat, and it is. Kant would reply that more is required to make a maxim truly 'universal' than simply expressing it in the logical form of a universal statement.
Yet surely it is not the case that at all times and at all places, a witness to a 'crime' is morally obliged to testify in court. If as a student during the Third Reich I had the misfortune to hear my professor uttering words of criticism of Adolf Hitler, I am not morally obliged (even though I may be obliged by Nazi law) to attend as a witness for the prosecution. (There is, of course, a potential moral dilemma here for anyone who holds that there is a moral obligation to always obey the law, whether you agree with it or not: The issues are explored in the ISFP Fellowship dissertation by George Brooks on Positive Law Theory and its application to the case of Nazi Germany.)
The challenge for Kantians would be to find an acceptable path between the overly lax and overly rigid formulations of what the maxim of your action would be in this case. The result which we want is one where there is a moral obligation to testify in cases like that of the restaurant thugs, but no moral obligation to testify, or indeed a moral obligation not to testify, in cases like that of the outspoken professor.
One possibility would be to incorporate the caveat that testifying 'serves the interests of justice'. Once again, however, that makes things too easy. The Categorical Imperative was supposed to be the infallible touchstone of moral action, but now we would be appealing to a prior understanding of what is 'justice' or what actions are 'just' or 'unjust'. Nor, indeed, would we want it to be the case that whenever witnesses are asked to testify, they first have to decide for themselves what does or does not serve the interests of justice. That is why we have judges.
In some ways, the challenge to the Categorical Imperative looks similar to the case of lying. Kant notoriously argued that it was never right to tell a lie, even in the case where a crazed axeman is pursuing his intended victim and demands to know, 'Which way did he go?' (In his essay, 'On the Supposed Right to Lie Because of Philanthropic Concerns', Kant argues, unconvincingly, that e.g. if you say, 'He went left' thinking that he went right, and in fact unknown to you the victim did go left, then you would bear full moral responsibility for the outcome.)
Despite the well-known objections, I do think that Kant is onto something important in the case of lying (see Unit 5 of the Ethical Dilemmas program). We have to recognize, as Kant apparently did not, that even for the impeccably 'good will' there can sometimes be irresolvable ethical dilemmas. Whatever you do will be 'wrong', so you have to choose the lesser of two evils.
In the case of the obligation to testify, more is needed than simply the rule that one must always tell the truth. I can simply refuse to enter into the court room. So the challenge for the Kantian in the case of the obligation to testify is, if anything, harder than the challenge in the case of apparent counterexamples to the moral principle that one should never tell a lie.
If the challenge can't be met, then that is bad news for the claim of the Categorical Imperative to provide an infallible touchstone for ethics, and your moral intuitions about your son testifying in court survive. On the other hand, if the challenge can be met, then once again your moral intuitions survive. Either way you are right and your 'philosopher friend' is wrong.
Can the challenge to Kant's Categorical Imperative be met? My hunch is that Kant's strategy would be to invoke the Third formulation of the Categorical Imperative:
Therefore, every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends.
Immanuel Kant, Groundwork of the Metaphysic of Morals (ibid.)
A 'kingdom of ends' in Kant's conception is not a mere collection of isolated individuals, each of whom takes care not to encroach on the moral rights of others. On the contrary, Kant's vision is overtly teleological, something that was not apparent in the first (or indeed the second) formulation. In a kingdom of ends each of us has a responsibility for actively supporting the state and the rule of law.
That doesn't mean I have to set myself up as judge and jury. It does mean that one has to acknowledge one's duties as a citizen. In contemporary terms, that includes voting, jury service, and, where necessary, attending as a witness in court.
My intuition is that there is indeed a fine line between responsible citizenship and being a busybody or a 'snitch'. In a relatively trivial matter like littering or indecent behaviour I would rather not be called upon to play my part in oiling the wheels of justice. In such cases, the Categorical Imperative does look like a rather blunt instrument, but I don't know of any moral theory which would fare better. So much the worse, some would say, for 'moral theory'.
The first thing you need to realise is that philosophy is not science. Kant didn't make fundamental discoveries, like Newton's laws of gravity, that we are all agreed on. Like your friend, I am also trained as a philosopher, and I have read all of Kant's writings. Unlike your friend, I think that everything Kant wrote is nonsense. It's not obvious nonsense, but it is nonsense, and I don't know many other philosophers who agree with Kant about moral principles.
I think if you ask your friends you will find that they don't know anything about Kant and that they have to do their moral reasoning without reference to obscure philosophers.
To argue, on the basis of spurious pseudo-philosophical reasoning, that it is wrong to give evidence in court against criminals who have caused severe injury to someone shows that your friend is dangerously wrong. He is not only a poor philosopher but he has a dangerously defective sense of morality. I feel sorry for him and I certainly wouldn't trust him as a friend.
However, let's follow your friend's nonsensical reasoning. Suppose your friend saw your son being shot and killed, and that he knew who the murderer was. Since the police had no time to prevent the murder and he had no time to intervene, all that is left is bringing the killer to justice. Yet your friend, who saw the murder, would refuse to give evidence in court and would excuse himself by appealing to his own spurious, twisted version of something written by Kant that is not true even when interpreted correctly.
I would suggest that your friend needs philosophical re-education. However, he also urgently needs moral re-education. Your son acted bravely and should be praised for what he did, not subjected to the ignorant criticism of a pedant.
I make no pretence to be qualified to offer an interpretation of Kant's ethical theories. (I am in fact going to be very interested to see what answers those better qualified have offered you, when Geoffrey posts those answers on a new Answers page.)
So I don't intend a direct answer to your question. What I am going to do is offer a few thoughts from a different ethical perspective. The difficulty I find with Kant's approach to 'Duty Ethics,' and his maxim that one should treat other people as ends in themselves and not as means to your own ends, is that it seems to be incompatible with the fundamental observation that human beings are an evolved species. Kant's ethical principles result in behavioural rules that never could have evolved. Hence they must be provided with some other, non-evolutionary basis.
Kant would probably argue that his ethical principles are the logically necessary result of the application of reason. And it is our ability to reason that has evolved, rather than the ethical principles themselves. But as David Hume pointed out, reason itself is not a motivator. We can apply all the reason we want, and still not be motivated to implement the dictates of our reasoning. What is necessary is a want, desire, or need (or a 'passion': as Hume calls it) to motivate us to implement what reason dictates.
Kant's ethical system therefore relies for its effectiveness on the education and training of the people. Only by inculcating into the populace a stand-alone (and unquestioning) desire to 'do the right thing' (and/or 'avoid the wrong thing' and/or 'do as reason dictates') can a Kantian hope to motivate people to follow a Kantian ethical judgement with the appropriate action.
To me, this result is unsatisfactory and counter intuitive. I would seek the basis of ethics in an evolutionarily sound fundamental principle that is self-motivating. Oddly enough, you can use such an evolutionary principle as a reply to your friend's challenge. You mentioned that your friend challenged you to come up with a universalizable ethical principle, and claimed 'If a principle cannot be universalised without contradiction it is not true and cannot be true. It may be an emotionally attractive principle and make you feel better, but it still isn't true.'
Try this one on your friend 'Act always so as to maximize the probability that your genes will maximally flourish over the longest time frame possible.' Of course, relying on this principle necessarily means that you have to dismiss Kant's maxim about means and ends. But the advantage of the principle is that it is evolutionarily sound, and nicely universalizable.
And contrary to a Kantian means-end motivated criticism, the principle does not imply a narrow-minded egocentric blindness to the interests of other people. Given the social nature of our species, our genes tend to flourish better when we cooperate in mutually beneficial ways. It pays, history has taught us, to cooperate voluntarily with others rather than not. A social environment that is conducive to such cooperative efforts is better than one that is not. As a rule of thumb, therefore, it is better to respect the self-interests of others than not. And we're back to Kant's means-end principle, albeit as a rule of thumb rather than a categorical imperative.
How it applies to the particular scenario you laid out in your question is much more readily apparent than might be any Kantian analysis. Your (or rather your son's) judgement was that the social environment within which his genes must flourish would be better if he participated in the judicial treatment of the malefactors, than otherwise. By this ethical rule, therefore, your son did the right thing.
This is not, of course, a direct answer to your question. I don't care whether your friend was right or not in his interpretation of how the Kantian system of ethics should analyze your son's choice of actions. It is rather an indirect answer to your question: I think your friend was applying the wrong ethical principles to your scenario. By applying the principles of evolutionary ethics, it is clear that your son's choice was the right one. And on this basis your friend's criticism was all wrong.
(5) Peter asked:
I was watching a video about a festival in which participants believe they were fairies in a past life. I thought that, like all people, they have the right to live their lives as they like, as long as it is not harmful to others. That is debatable, but by Western standards I think it reasonable. However, it occurred to me that although there is little solid evidence that they are faeries, there is also little evidence disproving it. So whether you think they are detached from reality is one thing, but I think this is applicable to many other situations involving delusions.
So, in short, if a reality is said to exist and there is no proof disproving or proving it, where does one start? If there is a system that is beyond tangible interaction, what proves or disproves that it exists? Is this question too metaphysical, or even relevant to philosophy? If anything, I need direction on this subject matter of realities and perceived realities, because I am still in high school and I don't have a class addressing such matters. Any reference would be nice, and if this is not even a philosophical question, I apologize.
Not only is your question a philosophical one, but it is a most important one. In the past philosophers distinguished between phenomenal knowledge, known through the senses, and noumenal knowledge, known by the mind. Later, F. H. Bradley made the same distinction in his book 'Appearance and Reality.' And in modern science the same distinction is made between empirical knowledge and theoretical knowledge. The distinction arises, not because of delusions, but because of illusions. (Delusions are clearly false beliefs; illusions are false perceptions: the noumenal-phenomenal distinction again.) The important thing about illusions is that they are unreal; some of them are obviously so because of contradictions between different senses, as with the half-immersed stick in a glass of water, which is bent to the sight and straight to the touch; others are contradictions between what you perceive and well-established belief, as with the apparent sizes of the Sun and the Moon being equal. And no contradiction can be real, so illusions are unreal. If someone asked you to point to an empirical object that was wholly free of illusion, could you do it? And if you thought you could, how would you know it to be so? Or consider our most important sense, vision. Visible size diminishes with distance, in all three dimensions; shape varies with viewpoint; and colours are secondary qualities, manufactured by the eyes; what is left that is real?
So if the common sense belief in realism (the belief that the empirical world that we each perceive around us is real) is false, being at best only partly true, then we have to speculate about the nature of reality. Such speculation gives us noumenal, or theoretical, knowledge of reality: knowledge that is empirically unknowable. And, also, knowledge that may easily be dead wrong. In the past such knowledge was called metaphysics, but nowadays it is theoretical science. Theoretical science is strictly disciplined by having to conform to empirical data; and if you consider such theoretical ideas as the curvature of four-dimensional space-time, the Big Bang, wave-particles, and the like, you realise that theoretical science is far removed from common sense.
Another point in all this is that noumenal ideas are invented in order to explain phenomenal knowledge: theoretical science explains what empirical science describes. Explanation is causal: to describe causes is to explain their effects. We need this because there are no empirical causes, only empirical correlations. All noumenal studies attempt to explain by means of unperceived entities: myth by means of hidden spirits, theology by means of God, metaphysics by means of substances and attributes, theoretical science by means of mathematical entities, and even common sense by means of things that cannot be perceived, such as minds other than one's own and the continued existence of empirical objects when no one is perceiving them. If you ask a theoretical physicist what it is that theoretical physics describes, the usual answer is that it describes the underlying causes of empirical phenomena; and 'underlying' is a metaphor for non-empirical.
However, there are difficulties. The distinction between reality and appearance was in the past attributed to the representational theory of perception, which said that empirical objects are not real objects, they are only representations of real objects; and in so far as they are false representations, so are they illusory. Today this theory has become the causal theory of perception, strongly backed up by science: real objects cause images, or representations, of themselves in the brain of the perceiver. Real objects are outside the head of the perceiver, they are public, and they are material; while images are inside the perceiver's head, are private, and are mental. And all empirical objects are outside our heads, public, and material. Therefore empirical objects are real objects, not images of real objects. This latter view is called realism, or, sometimes, common sense realism, or, sometimes, naive realism. And for the past century it has dominated philosophy: English language philosophy has almost all been analytic philosophy, which disallows speculation, and continental philosophy, such as phenomenology and existentialism, has also been realistic. The trouble with realism is that it cannot account for the extraordinary success of theoretical science.
These difficulties can be resolved, but not in the space available here. If you would like to try to resolve them for yourself, try to answer two questions: is your own empirical body a real object, or just an image of one? And, if the latter, where is your real body? Alternatively, you could look at my e-book, 'Belief Shock,' downloadable free from www.sharebooks.ca.
(8) Benjamin asked:
True or False: The fact that we often have difficulty putting our thoughts into words disconfirms the view that we think in the language in which we speak.
False. The question isn't whether, in fact, we think in the language in which we speak (e.g. English) or whether we think in some other language (e.g. Jerry Fodor's 'language of thought'). That's a big debate, which we don't have to go into.
What Benjamin's question specifically asks is whether the fact that we often have difficulty putting our thoughts into words is compelling evidence against the view that we think in the language in which we speak, or, what amounts to the same thing, whether it is evidence for the view that we think in some other 'language'.
It is not. The standard reply is that, 'If you can't find the words, you don't have a clear thought.' There's something you think you are thinking about, a thought you are trying to think, but you haven't succeeded in actually thinking that thought. When you do finally find the words, then the thought comes into being and not before. All the feelings that you have prior to that point, the feeling of unease, of something tugging at your mind, or whatever it is, are just that and nothing more.
But how do I know that? I don't need to know. So far as one is able to tell purely from introspection, it might be true that a thought comes into being with the words that express it. It might also be true that the thought is prior to the words, but that's irrelevant. All one needs to defeat the claim in question is that it doesn't follow from the fact that we sometimes strain to express a thought that the thought is prior to the words.
It looks like I don't have much to write today. But actually, there's something tugging hard at my mind that tells me that this is all too superficial. I don't like it.
I read Fodor's book Language of Thought (1975) at the beginning of my first year as a graduate student at Oxford. His thesis seemed rather fanciful to me, not least because of what he said about Wittgenstein's argument against a private language. I'd just picked the book up at Blackwell's Bookshop because it looked interesting. It never occurred to me that it would generate the vast body of literature that it has. That fact doesn't make me feel the least bit sorry about my initial judgement.
However, I want to come at this from a different angle. I've lost my taste for the technical complexities of this debate, which takes in philosophy of language, philosophy of mind, cognitive science and AI.
At Oxford, there were a couple of other books which I read, by a philosopher who is hardly discussed today, Justus Buchler. (There's no Wikipedia entry, a telling sign.) The titles are The Nature of Judgement (1955) and Metaphysics of Natural Complexes (1966). I remember discussing what I'd read with my supervisor John McDowell who'd never heard of Buchler. But then no-one else I mentioned him to had either.
On Buchler's theory of judgement, a pole vaulter's leap, a painting, a skyscraper, or a sentence in English can all be called, without equivocation or metaphor, 'judgements'. When a pole vaulter vaults, or when an artist paints a painting, or when an architect designs a building, each is engaged in a thoughtful activity which exists side by side with, and in a sense independently of, the thoughtful activity of forming sentences in speech. But also more than just an 'activity'. The final result is an entity that exists in its own right, as a product of what went before, in the same sense that a verbal judgement is a final product of the activity or process of thinking.
Let's just try to imagine what it's like to be that pole vaulter. You've just failed the last jump. You were nearly over, but your left heel just caught the bar. What's going on in your mind? Lots of words, to be sure, perhaps a few swear words. But there's something else there too. As you feel the weight of the pole balancing in your palm, as you get ready to sprint, eyes fixed on the bar, the words that come into your head aren't the essential thing.
The run, the leap, the twist, every part of the choreographed movement is an action, which forms part of an articulated sequence of actions, just as words form parts of a sentence. Just as Frege held that the meaning of a word consists in its contribution to the meaning of a sentence or statement, whose aim is to state something true, so the 'meaning' of that particular vault depends upon its contribution to the attempt to clear the bar. The whole action, the 'judgement' succeeds or fails, just as a statement succeeds or fails in stating the truth.
Contemporary philosophers of mind and action probably wouldn't find too much here to argue with. However, what is important for me is the emphasis. Too much emphasis is placed by philosophers on the language question. What Buchler's account suggests is that there is a far greater richness to our mental life than the verbal thoughts we think. This extra component is not just 'experience' or 'feeling' but rather rational activity, a form of reasoning which exists apart from words, and which cannot be reduced to language.
To be sure, this 'rational activity' is not just some process in the head. If anything, it is far more obvious that doing a pole vault in your head isn't doing a pole vault even though the ability to imaginatively represent, accurately, the intended action or sequence of actions is part of what constitutes the pole vaulter's mastery of this particular field sport.
Could a creature who did not have verbal language 'reason' in this way? Here we are pulled different ways. It is human reason and judgement, expressed in words, that is involved in evaluating a piece of architecture or a work of art. In a similar way, in a diving contest the judges are able to defend, in words, the marks that they award. In a jump or a vault, on the other hand, success or failure is a simple verifiable fact. The bar is cleared or it is not cleared.
What that overlooks, however, is the fact that the ability to reason and form judgements in words is essential in an athlete's training. There is a science of sport. The quest for greater performance is, at least in part, a scientific endeavour.
So what does all this show about thought and language? One's initial reaction might be that Buchler has presented a clear case that there are forms of thinking or judging which do not involve words. There are other forms of ratiocination besides linguistic ratiocination. Perhaps no-one will ever write the definitive 'logic' of pole vaulting, but that merely reflects the unique capacity of language to make a particular and very important species of judgement, linguistic judgement, possible.
On second thoughts, surely what this shows is that we need to rephrase the question. Language is essential to linguistic or 'logical' thought. Other forms of thought require their own media, whether it be the pole vaulter's body and pole, or the painter's eyes, hand and paint brush, media which are just as much part of our shared, common reality as words are.
(21) Kalyan asked:
I claim and proclaim to be an atheist as well as a skeptic rationalist. But then, my question, is it a contradiction in the sense that as a skeptic and a rationalist, I don't have enough evidence to prove my arguments as an atheist?
The short answer to Kalyan is that you can be an atheist while holding a reasoned skeptical stance ('reasoned' because your skepticism is neither pathological nor mere blind obstinacy) without believing yourself to be in a position to offer a proof that God does not exist. It suffices that you can offer arguments in favour of the view that atheism is the 'best explanation'.
'Best explanation for what?' is the question. The existence of a world (rather than no world) is one possible explanandum, or thing to be explained. Another possible explanandum is the existence of a Moral Law (if you believe in such a thing). But there are many more, maybe as many as there are views on the nature of the godhead.
I have never undergone the experience of a religious revelation. But supposing I did, would I be in a position to consider theism and atheism as alternative explanations and, moreover, choose atheism on the grounds that it provided a better explanation for my experience than theism? Well, yes, that is what one has to say as an atheist. But I admit it sounds rather odd to say it. I can see a case for arguing that an experience wouldn't be the experience of religious revelation if you regarded it as possibly illusory. But then again, that problem doesn't arise if the explanandum is another person's (alleged) religious revelation.
The idea that a scientific theory is an 'inference to the best explanation' goes back to the American philosopher of science C.S. Peirce who distinguished what he termed abduction from the process of Baconian induction. The idea was more recently revived by British philosopher of science Peter Lipton, and has become part of the vocabulary of contemporary analytic philosophy.
My University of London external students taking the BA Philosophy of Science module have been sending me essays on this topic, along the general theme, 'Is inference to the best explanation a distinctive kind of explanation?' I find Lipton's idea somewhat hazy, and yet there seems undoubtedly to be a core notion, which the God question illustrates nicely. You wouldn't seriously claim to have inductive evidence for atheism. Yet it seems to make perfect sense to say that atheism is a better explanation for any alleged evidence that a theist might put forward than theism.
According to Occam's Razor, other things being equal the better explanation is the one that posits fewer hypothetical entities. God is an unnecessary posit. Any explanation that does any work, works just as well without God directing things behind the scenes. That would be the moderate atheist view.
Enter Dawkins. In 1976, in my first year taking the Oxford B.Phil, there was a rumour going round that the redoubtable Gareth Evans was offering his undergraduate tutees and graduate students a free hardback copy of The Selfish Gene (which had been published that year) provided they promised to read it. With such a great testimonial, I could never bring myself to indulge in the fashionable Dawkins-bashing, despite Dawkins' somewhat embarrassing reductive views of the nature of philosophical inquiry, as a mere illustration of the theory of 'memes'.
Apropos of the meme theory, the Presocratic philosopher Xenophanes is the first recorded philosopher to employ a genetic argument against a religious claim:
Ethiopians say that their gods are snub-nosed and black, the Thracians that theirs have light blue eyes and red hair.
Kirk, Raven and Schofield The Presocratic Philosophers §168, p. 169
As Xenophanes must surely have realized, this isn't an argument that God cannot be black and have a snub nose. What the observed 'coincidence' shows, in our terms, is that the Ethiopians' reasoning to the best explanation is likely to have been somewhat biased. Having said that, if you believe that man is 'made in God's image' and your only experience of human beings is of people who are black and have snub noses, then it is surely reasonable to infer that God is black and has a snub nose.
However, by the same token, someone who had travelled a bit and discovered that different races have different physiognomies, would realize that this inference was not reasonable, and that any claim of 'resemblance' between God, or the gods, and man must allow for racial variation.
What this shows, if anything, is that you can undermine a purported inference to the best explanation by pointing out grounds for possible suspicion of bias, by showing that the explanation relies on an impoverished evidential base, or both. At any given time, however, the explanation remains in place until either a better explanation comes along, or the grounds for putting forward that explanation are undermined.
I would therefore be quite happy to accept that the belief that atheism is the best explanation for the existence of the world, or the phenomenon of religion or anything you like is a 'meme', in Dawkins' sense, whose evolutionary history goes back to the great historic clashes between established religion and the emerging sciences. That doesn't decide the question whether atheism is or isn't in fact 'the best explanation'.
But doesn't our very sense of what makes one explanation 'better' than another depend on prior conditioning, on the memes that have been transmitted to us? Is there a fact of the matter here? Couldn't we be completely wrong about what is or is not a good explanation?
For Dawkins, the spectacular success of science is a major consideration. The kinds of criticism that any scientific claim is subjected to by other scientists do not vindicate themselves (because the same argument can be run with 'the kinds of criticism that any theological claim is subjected to by other theologians'). However, the advantage science has over theology is in its results. Religious belief has 'results' too, but those results arise from the belief's psychological effect on the believer rather than from the truth of the belief: a vital distinction.
As I've said, it all depends on the explanandum. Here, there is a nice finesse in that the atheist isn't the one who has to state what the explanation is intended to explain. Atheism is not a claim, but rather the denial of a claim. The onus is clearly on the one who makes the claim, the one who asserts that God exists, either to offer a proof, or, failing that, to justify the view that God's existence is a better explanation for XYZ, whatever 'XYZ' may be, than any alternative.
(28) Ben asked:
Hi. I'm currently sitting a philosophy A level and I'm really struggling to comprehend soft determinism/ compatibilism. How can free will be compatible with determinism? Surely by definition they both necessitate exclusivity of each other.
According to the usually accepted definition of free will, determinism and free will are indeed mutually exclusive. However, compatibilists have an alternative definition of free will which, they say, is compatible with determinism while still giving us 'free will worth having'.
Determinism means that all events, including our acts, are consequences of the laws of nature plus previous events (ultimately in the distant past), so that everything which does happen must happen. It is inevitable. At no point could I have done otherwise than what I did. This is clearly incompatible with free will defined as the ability to have done otherwise, to have chosen to do something different, to have alternative possibilities (AP). So, by definition, as you say, no free will (AP) if determinism is true.
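The definition just given can be put schematically. What follows is the standard formalization due to van Inwagen (my gloss, not the answerer's own notation): let P_0 be a proposition describing the complete state of the world at some moment in the distant past, L the conjunction of the laws of nature, and P any true proposition about the present. Determinism is then the thesis that

```latex
\Box\bigl((P_{0} \land L) \rightarrow P\bigr)
```

That is, the past together with the laws entails every truth about the present. To 'have done otherwise' (AP), an agent would need the power to falsify either P_0 or L, which is exactly what the incompatibilist denies anyone can have.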
The only ways of preserving our free will are: 1. Deny that the world is deterministic, and try to account for free will (AP) by a dash of indeterminism (quantum events and chaotic dynamics in the brain for example, or the 'self' as a causal agent) without making such indeterminism mere randomness. This is what libertarians try to do (unsuccessfully in my view) but that's a side issue as regards your question. 2. Accept determinism (no AP) but define free will as being able to act freely even though you have no AP. This is what compatibilists do. But is this 'free will worth having'?
Clearly it's worth being able to do what you want (to act voluntarily) rather than being compelled at gunpoint to do something else, or prevented from doing what you want by being in jail. But this is to confuse voluntary action with free will. John Locke had no such confusion. In his Essay Concerning Human Understanding (4th ed 1699) he speaks of a man being carried asleep into a room and the door being locked. The man awakes, finds the company in the room most congenial and has no wish to leave. Here, Locke says, the man acts voluntarily in staying in the room, but yet has no liberty (he can't leave the room even if he wants to). Some compatibilists try to maintain that because the man chooses or wants to stay, he exercises free will even though (unknown to him) he can't do otherwise.
Frankfurt and others have devised ever more elaborate scenarios where the lock is in the brain rather than in the door (mad scientists remotely monitor your brain and can stop you doing anything they don't like, but, as it happens, they never have to intervene because everything you do voluntarily is OK by them) purporting to show free will without AP. But, whether the brain is monitored or not monitored, actions, including voluntary ones, are still determined. Another argument used is that you could do otherwise if you wanted to, but you don't want to. Of course not, I say, because what you want is determined, and couldn't be otherwise. Compatibilism agrees with this point but holds that you are 'able' to do otherwise in the sense that, had the past or the laws of nature been different, you might have done otherwise. But how is this an 'ability'? It can never be exercised.
So, free will, defined as having AP, is indeed incompatible with determinism, and, defined as acting freely without AP, merely amounts, in my view, to voluntary action which is nevertheless determined.
If you still want free will, have a look at the libertarians' arguments.
If they don't convince you, get off the age-old merry-go-round about free will, accept it's an illusion (very compelling, I admit) and think about two sorts of question: 1. What's the mechanism of this illusion? Do all humans have it? Would all self-conscious entities in the universe feel they have free will? Will future computers think they do? 2. How should we live knowing (or at least justifiably believing) we have no free will? (No praise or blame, of course, but no moral indignation or recriminations either; still right and wrong, approval/disapproval; still quarantining dangerous criminals to protect the public.)
Best of luck with your A level(s).
(32) Asia asked:
Do holes really exist or are they pockets of non-existence?
Whoa! I know someone who would love this question: my erstwhile student and Pathways mentor Brian Tee. Brian got his MA in Philosophy from The University of Sheffield and now owns a bookshop in Sheffield, a nice job for a philosopher. I have to apologize to Asia in advance because Brian would have been able to give a much better answer than me. But I can only try my best.
I remember having a three hour discussion on the philosophical topic of holes with Brian while downing pints of Easy Rider at The Sheaf View pub, just up the road from my office. John Riley, another ex-student, who designed the banner for The Ten Big Questions, was also there. The discussion was sparked off when Brian pointed to the absence of beer in his glass and reminded me that it was my turn to buy a round.
How can an absence be something? As any beer drinker knows, the absence of beer in your glass is a very serious matter which needs to be rectified as soon as possible. Somehow, that got us onto the topic of holes.
Let's say that holes undoubtedly exist. Then what is a hole?
Consider a hole in a wall. (I think that was my bright idea.) A hole is something you can climb through: an opportunity (if you are trying to get to the other side of the wall) or a threat (if you are trying to prevent someone from getting to the other side of the wall). However, a hole, say a gap in the brickwork, isn't a hole in the wall if it is too small (then it's a crack, another concept that one could look at), or if air is blasting through at a sufficiently powerful rate, or if it contains a guillotine designed to chop you in half if you try to climb through.
Chicken wire is full of 'holes', but a hole in a chicken wire fence is a matter of concern to the farmer, especially if there are foxes about. Here again, what does or does not count as a hole is relative to the function or purpose of a given item.
Is a hole a thing? Consider the holes in Emmental ('Swiss') cheese. If you bought some Emmental at the supermarket and then discovered that it didn't have any holes, you'd have the right to complain: the cheese may taste the same, but it isn't Emmental without the holes. You'd miss the peculiar pleasure of exploring the holes with your tongue as you bite into the cheese. Visual appearance is also very important. In this and in many other cases, holes are a positive aesthetic feature.
However, so far we are merely skirting round the issue. Talk of the 'functional' or 'aesthetic' role of holes merely underlines the reasons why we take a practical interest in these strange objects. The philosophical question, however, is what holes are, ontologically speaking.
From the point of view of logic, to say that a hole is a 'something' is to assert that it is an 'entity with an identity' in P.F. Strawson's sense: an object of reference whose persistence and identity conditions are sufficiently well defined to enable a speaker and hearer to identify it as the 'same again' on different occasions and say things about it.
One of the things we discussed in the pub was Sartre's discussion of 'the absence of Pierre'. I'm waiting in a coffee bar for Pierre but Pierre hasn't shown up. Wherever I look, Pierre is not in my field of vision. In terms of Gestalt psychology, I perceive the cafe not just as general scenery but as a ground on which I am expecting a figure to appear. All the details fade into a more or less uniform blur. And yet what I perceive is not merely a blur but something positive, Pierre's absence.
To perceive a hole is to perceive a gestalt, a 'figure' on a 'ground'. But, equally, to perceive the absence of a hole is to perceive a gestalt. The hole searched for is not there.
Frege or Russell would say that the absent Pierre isn't a peculiar kind of object inhabiting the 'realm of non-existence'. Rather, the statement, 'Pierre is not here' can be analysed in first-order predicate calculus as, 'For all x, if x is in the cafe, then x is not equal to Pierre', or, analysing proper names à la Quine, 'For all x, if x is in the cafe then x does not have the property of being-Pierre'.
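The Frege-Russell paraphrase can be set out explicitly. As a sketch (the predicate names 'InCafe' and 'Pierrizes' are my own labels for illustration, not standard notation):

```latex
% 'Pierre is not here', with 'Pierre' as a genuine singular term:
\forall x\,\bigl(\mathrm{InCafe}(x) \rightarrow x \neq \mathrm{pierre}\bigr)

% Quine's variant, trading the name for a predicate
% ('x Pierrizes' = 'x has the property of being-Pierre'):
\forall x\,\bigl(\mathrm{InCafe}(x) \rightarrow \neg\,\mathrm{Pierrizes}(x)\bigr)
```

Either way, the apparent reference to an 'absent object' disappears: the negation attaches to an open sentence inside the quantifier, not to a mysterious non-existent thing.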
However, this response still fails to address the question why absences, or holes, are philosophically interesting, and indeed why Sartre sees the very notion of 'nothing' or 'nothingness' as having deep phenomenological or metaphysical significance. You don't have to believe that holes are 'made of' a special kind of non-existent stuff, or think of holes as 'pockets of non-existence' in order to sense that holes are somehow problematic and disturbing.
Assume that a hole is, as stated above, an 'entity with an identity'. Holes which meet this criterion are like things, and yet they lack many of the essential qualities of things. Holes lack the defining properties of a 'substance' in Aristotle's sense. In Lockean terms, holes do not have 'primary qualities' from which their 'secondary qualities' flow.
And yet holes are like things in that they have a natural life, a natural history. Consider the hole in a sock, which starts off as a broken thread and then gradually grows and grows until your heel sticks through. The holes in Emmental are produced by a biochemical reaction; their distribution and size are carefully controlled by the precise conditions under which the cheese is manufactured. And yet they are not made of anything. They contain pure carbon dioxide but they are not made of carbon dioxide, any more than a hole in a brick wall is made of air.
Just like physical objects, holes can combine and merge. Two small holes in a sock can gradually grow until they merge and become a bigger hole. Equally, holes can be divided up. Adding a few strands of wire fixes the 'hole' in the chicken wire fence. From one point of view, the larger hole has been divided up into smaller holes, but, as we have seen, the smaller holes are not holes in the fence, which as a result of the timely repair is once more an effective fence, sufficient to keep the foxes out.
In the pub we also considered the idea that the edge or rim of the hole constitutes its actual, physical presence. It is true that in describing the precise dimensions of the rim you have described the dimensions of the hole. And yet logically the rim, qua physical stuff, cannot be a constituent element or part of the hole, because you can fill the hole in (e.g. a hole in a wall) without in any way changing the material properties of the rim. Equally, if I mark a chalk circle where I plan to cut a hole in the wall, I have defined a potential rim which in a sense actually exists (as physical material) and yet does not yet exist, just as for the sculptor the statue already 'exists' in the uncarved stone.
Come to think of it, what is it that one 'sees' in the hunk of stone?
Last week, I was kicking around possible designs for a new web page, ISFP Publishing. The idea is to help unknown authors promote books on philosophy. Somehow, I gravitated towards the idea that the background should look like old paper. I found something very nice on Flickr. But still, there seemed to be something missing. Then the idea came to me, from I don't know where, that what the page needed was a fly, crawling across the paper. The people I've shown the page to agreed that the fly was just right and nothing else would do. But how did I know this, from just staring at the space where a fly was not? What did I see?
However, I think there's something else that needs to be emphasized, something to do specifically with our psychological attitude to holes in particular, which does not apply to absences or lacks generally.
As a matter of physical fact, our bodies are porous (from the Greek poros, passage or pore). The human body is made of, defined by, its holes. (Something about this reminds me of Tantric philosophy.) Through these passages and channels, information and physical material flows in and out. The miracle of reproduction is the most impressive example.
The very notion of perception involves the idea of holes or channels whereby information is conveyed into our minds from the external world, through the eyes, ears, nose. To be receptive to experience is essential to our connectedness with the world and our surrounding environment, as indeed it is to our capacity to communicate with one another. Yet equally important is the role of holes in relation to physical needs, the need to breathe, eat etc.
Last time, I strayed into Freudian territory in talking about 'male' and 'female' aspects of the impulse to philosophize. Leaving aside the differences between the sexes, the discovery that one has an anus as well as a mouth must be a momentous event for the human infant.
All of which leads me to conclude that what makes the topic of holes so enticing is not just one thing but a potent combination of factors.
Well, those are some more or less jumbled thoughts. Holes exist. But there is no single, definitive way of stating what makes something a hole. It depends on your point of view, or interest. And I've tried to explain why holes are so 'interesting'. If there is a core or real essence to the 'philosophical problem of holes', I don't think I've found it. Maybe you will, Asia, if you keep looking. Or ask Brian.
(33) Martin asked:
If a meritocracy is a type of society where people are rewarded in line with their intelligence or ability, has there ever been talk of a type of society where morals and good will are rewarded? Could this type of society be made possible and how?
Martin, I must say that I find this a most interesting question. Although on the surface there appears to be little difference between the two 'types of society' you mention, on examination, it can be argued, the difference is quite profound. The difference is that, in a moral society, one holds to the moral or ethical values of the community because one believes that this is the manner in which one, as a member of such a community, should conduct oneself; whereas, in a meritocracy, it is considered, in the main, advantageous to oneself merely to be seen to behave in an upright and moral way.
Edith Stein, in her essay 'The Individual and Community', deals very well with this issue where she describes the difference between what she calls 'the community man' and the 'association man'. Before anything else, says Stein, if you want to understand in what sense you can talk about the universe of sentient reality into which the lone psyche fits as a member, you have to clarify a determinate form of the living together of individual persons. Where one person approaches another as subject to object, examines her, 'deals with' her methodically on the basis of knowledge obtained, and coaxes the intended reaction out of her, they are living together in association. Conversely, where the subject accepts the other as a subject and does not confront him but rather lives with him and is determined by the stirrings of his life, they are forming a community with one another. In the association, everyone is absolutely alone, a 'windowless monad', as Leibniz might say, whereas in the community, solidarity prevails. Thus, whereas a society founded solely on meritocracy can lead to isolation, alienation and loneliness, the reward for a society grounded in moral values and good will is personal security and social stability.
In her essay, Stein takes the demagogue (a popular leader who appeals to the baser emotions of his people) as the purest example of the 'association man' who wants to make a crowd of people subservient for his own purposes. The bond of solidarity is severed between him and those who are objects of his 'treatment'. However, because subjectivity is the object of the association man (because he wants to make the people his 'subjects'), he needs the posture of a community man as an epistemological expedient (it serves his purpose to gain the reputation of a 'community man'). Stein identifies the 'association man' as an 'observer'. What distinguishes the observer from the spontaneous participant is that the observer rationally takes advantage of what community life offers him and in doing so he uses his 'intelligence and ability' to create the guise of a moral agent for his own personal merit. As a type of Machiavellian figure, he passes over from spontaneous experiencing into a wary posture, he makes everyone else's inwardness into an object instead of immediately 'reaching' to it, and he exploits the knowledge of it for the purpose of his transactions. On the other hand, the 'genuine' man of the people puts himself at the service of the people out of a natural predisposition. What counts for him are the wishes, needs, and the interests of the people, which he allows to affect him directly as a community man. However, whilst the 'impression' he makes is unintentional, once he becomes conscious of his function as a leader in the community, he is put in the position of having to study people in order to be able to guide them correctly. Still, it is possible for him to fulfil this role without passing over to the association posture. Thus, community is possible without association, but association is not possible without community.
Stein distinguishes genuine community or society from other kinds of unions amongst people. In line with the difference between the 'association man' and the 'community man', she holds that the principal distinction between community (Gemeinschaft) and association (Gesellschaft) is that communities are founded on organic relations between individuals, whereas associations are based on more artificial unions. In contrast to communities, which are focused on the well-being of all their members, associations are focused on certain goals and the means by which to attain these goals. Notwithstanding the distinction between association and community, few alliances are pure associations or pure communities; most are a combination of both. However, pure communities are possible, whilst pure associations are not.
(For more on this, see Stein, by Sarah Borden, 2003, pp. 47-64.)
(37) Krai asked:
I am not clear about the difference between idealism and realism.
Could you please give the essence of the two isms/ concepts.
First, a caveat. 'Idealism' and 'realism' are labels that are employed in many different areas of philosophy with varying degrees of 'term-of-art'ness. Since you ask your question in the absence of any particular context, I am going to assume that you mean these terms in their generalized metaphysical sense. This is the context of widest general employment of these terms, and involves the least 'term-of-art'ness.
In the history of Philosophy, there are two quite distinct traditions about the nature of the relationship between 'the Self' and what we think we perceive, what we think is real. They are the Idealist, or 'Inside-Out', tradition and the Realist, or 'Outside-In', tradition. (I like the more descriptive labels. I feel they are less confusing, since the Idealist/Realist dichotomy is used in many different ways and many different places within philosophy.)
(1) The 'Inside-Out' tradition is best exemplified by the famous quote from Rene Descartes: 'Cogito, ergo sum!' 'I think, therefore I am!' Philosophers of this tradition start with the incontestable premise that 'I think', and deduce from that the inescapable conclusion that consciousness is the fundamental given of metaphysics. Their argument is that to deny the premise 'I think', or that 'I am conscious', is a logical contradiction. The very fact that one is denying it necessitates that one is thinking and is conscious, thus invalidating the denial.
However appealing this approach is, it suffers from one fatal flaw that no philosopher has ever managed to overcome. Philosophers of the Inside-Out tradition maintain that our modes of consciousness and cognition modify or process the sensory inputs, so that what our consciousness is aware of as sensory evidence must be regarded as the product of our consciousness rather than unbiased evidence of reality. In that event, goes the inescapable logical conclusion, either we can know nothing about the nature of an alleged external reality, or anything that we can know about such an alleged external reality must be provided through other means than our senses.
There is no logical line of reasoning that can proceed from the basic premise that consciousness is the fundamental given of metaphysics, to the conclusion that there is a reality outside of one's own consciousness. Since there is no way to validate the evidence of the senses, there is no basis from which to conclude that the sensory evidence is valid. Philosophers of the Inside-Out tradition are therefore forced to conclude that all that is perceived, as well as all the contents of consciousness, is actively created by the nature of consciousness, the 'Self'. As it is impossible, therefore, to logically derive the existence of an external reality, there can be no logical foundation for any constraints on the nature of the contents of a particular person's consciousness.
So we have philosophers like Berkeley who argue that there is no external reality: what we think of as 'reality' is but ideas in some consciousness, specifically God's consciousness. And we have Kant, who argues that our understanding of the noumenal world (the unperceivable and unknowable reality that is the foundation beneath our sensory perceptions) is governed by the structure of our consciousness.
The proponents of the 'Inside-Out' line of reasoning support their arguments with examples and analyses based on evidence from the senses. Which is, of course, a logical contradiction since they argue that the evidence from the senses cannot be trusted. They assume that consciousness, as prior and primary to the sensory evidence, must generate our understanding from the evidence of our senses. Since this understanding is not a pure product of our senses, therefore what we understand about our sensory perceptions cannot be trusted as evidence of an objective reality.
Therefore, there can be no logical necessity for any standardization or similarity of the contents of consciousness from one person to another. In fact, there can be no logical necessity that there exists anything other than one's own consciousness. Any suggestion that there exists a reality, or that there exists other minds, is founded on untrustworthy evidence from the senses. The purest version of Idealism inescapably drives the logic towards Solipsism. And the only escape is to posit some unsupported additional premise (like Berkeley's addition of God) that can provide a loop-hole.
Because it denies the existence of any form of objective reality, the Inside-Out tradition logically results in 'Subjectivist' notions of Truth, Knowledge, and Ethics. The philosophy of Kant is perhaps the pinnacle of this school of thought.
There is also a sub-tradition maintained by those philosophers who start with the same 'Inside-Out' premise, but despair over the subjective consequences and proclaim the 'Nihilist' school: Truth, Knowledge and Ethics are impossible, illogical, and invalid pursuits for inquiry. The once popular school of 'Logical Positivists' is more or less of this school. Which is probably a good explanation of why Philosophy and Philosophers as topics of popular awareness are in such ill repute.
(2) The Outside-In tradition is best exemplified by Aristotle. Philosophers of this tradition start with the premise that thinking and consciousness are processes not things. By the very nature of what a process is, in order for a process to 'exist' (be in the process of processing) there must be something that is being processed. To think is self-evidently to think *about* something. To be conscious is to be conscious *of* something.
Philosophers of this tradition start with this premise and acknowledge that by the nature of processes there must first be something about which I can think or of which I can be conscious, and deduce the inescapable conclusion that the existence of something is the fundamental given of metaphysics. The argument is that to deny the existence of something is a logical contradiction. The very fact that one is denying that something exists necessitates that one is thinking about and is conscious of something thus invalidating the proposition. (By the act of thinking, one demonstrates that the thing that is thinking, and the thing that it is thinking about, both exist.) This argument is most succinctly (if not most cogently) expressed in the basic axiom of Randian Objectivism 'Existence exists'.
Start with the premise of a reality that exists (i.e. is 'real') as the fundamental given of metaphysics. Add to that the realization that if thinking and consciousness are processes that are about and of reality, then reality must exist prior to and independent of those processes. You can't have a process in operation, without something being processed. You can't be conscious, without being conscious of something. But a process is not necessary for the existence of something. Thus the premise of a reality that exists as the object of the process of thinking and consciousness, necessitates that reality is objective and independent of those processes.
If reality is not 'real' (i.e. objectively existent), then the information provided by our senses is not a valid basis upon which to base conclusions about the nature of Reality. For Reality to be other than 'real' would mean it would have to be 'unreal' (non-objective and/or non-existent). And 'unreal' means just that: something imaginary, or ideal, or constituted by our consciousness.
The approach that is more in keeping with 'Common Sense' is the view that 'out there' is not 'in here'. That there is a reality that is outside oneself, that does not respond to the whims and notions of one's conscious attention, and that does not disappear when one's consciousness is focused elsewhere. If reality is 'real', then the information provided by our senses is a valid basis upon which to base conclusions about the nature of Reality.
There are numerous writers of the Outside-In (Realist) school of philosophy, beginning with Aristotle, who have written excellent expositions on the 'real' and 'objective' nature of Reality. Among the more recent of these are Ayn Rand, David Kelley, and William P. Alston. I can do no better than refer you to the works of one of these authors. They have done a much better job than I could possibly do, and at far greater length than this text would permit.
(38) Zachary asked:
I was wondering can you have inalienable rights without the existence of god and if so how?
Everything depends on just what is meant by the words 'inalienable rights'.
According to the various online dictionaries I checked, 'inalienable' is an adjective that according to common usage means 'incapable of being repudiated or transferred to another; not subject to forfeiture; protected from being removed or taken away; unable to be removed.'
The word 'rights' is especially problematic, since it is so widely used and abused. According to those online dictionaries I checked, even as a noun, the word has many different meanings depending on the context of usage. Here is a selection of common usages that would be appropriate in the context of 'inalienable rights'. A 'right' is a noun that according to common usage means 'something claimed to be due by moral principle: that which is morally good or in accordance with accepted principles of justice, fairness, and honesty; that which is just, morally good, legal, proper, or fitting; anything in accord with principles of justice; an abstract idea of that which is due to a person or governmental body by law or tradition or nature; the interest possessed by law or custom in some intangible thing; a justified claim or entitlement, or the freedom to do something.'
Putting the two definitions together, you get quite a mouthful. I'll simplify things a bit, and shorten this mouthful down to: an 'inalienable right' is a 'morally or legally justified claim or entitlement that cannot be removed, repudiated, or forfeited'.
In political philosophy, the term 'inalienable rights' is used to refer to the concept of rights that are inseparable from those to whom they belong. The rights are presumed an inherent part of one's existence (as a person, or as a moral agent, or as a citizen, or as a resident depending on who is doing the presuming). Some supporters of the idea of inalienable rights believe that these are not granted by any human authority, but rather are present in all human beings regardless of whether they are acknowledged or not. Other supporters maintain that these rights can only be granted by human agency.
Based on my simplified definition (or even on the more expansive mouth-full), it becomes clear that you can indeed have inalienable rights without the existence of God. God only enters the picture if you (a) restrict the definition to 'morally justified claim or entitlement', and (b) maintain that God is a (or one, or the only) source of moral principles.
It is possible to argue that people have some selection of inalienable rights simply in virtue of their being people. In other words, as a logical consequence of being a self-conscious animal, you have a 'morally justified claim or entitlement that cannot be removed, repudiated, or forfeited' to certain freedoms and liberties. As but one trivial example, simply in virtue of being conscious, you have the inalienable right to think as you choose. You might not be able to do anything about what you think, but no one can remove, repudiate, or forfeit your ability to think as you choose. Or, as another example, there is an inalienable right to pursue your own happiness. Although you might not actually be able to do anything (you might be in chains in prison), you can at least pursue your goal. Supporters of this class of inalienable rights call them 'natural rights', in virtue of their argument that they stem from the nature of Man, rather than from the works (laws) of Man.
Alternatively, it is also possible to argue that people only have some selection of inalienable rights in virtue of the laws that govern where they reside. For example, a person might acquire an inalienable right to 'liberty' within a legal environment wherein the term 'liberty' has been given some specific definition, and provided with such protections that it cannot be constrained, removed, repudiated, or forfeited. Supporters of this class of inalienable rights call them 'legal rights', in virtue of their argument that they stem from the works (laws) of Man, rather than from the nature of Man. Those who maintain that the only inalienable rights are legal rights argue that there is nothing inherent in the nature of Man that provides any moral justification for 'claims or entitlements that cannot be removed, repudiated, or forfeited.' In the long run, evolutionary survival is the only fact that matters, and survival is a matter of tooth-and-nail struggle.
(42) Penny asked:
This is a political philosophy question about the incompatibility of national sovereignty and international institutions such as the UN, EU, treaty commitments and the legitimacy (or not) of enforcement mechanisms. I'm sorry it's so long.
For my entire adult life I have been a strong supporter of the UN and international law as the best hope to prevent and mitigate wars and help bring about, if not perfect global peace, harmony and justice, at least a reduction of conflict and more peaceful coexistence. I dislike nationalism, and particularly superpatriotism, which seem to me one of the principal causes of conflict, and have looked forward to the decreasing importance of nation states.
Now, since I've developed an amateur interest in philosophy and ethics, I discover that national sovereignty is seen by many as key to human progress and civilisation since at least the Enlightenment; that it is inalienable and by definition supreme, meaning that states cannot relinquish any part of their sovereignty, thereby destroying any claim to legitimacy of international law (and the courts to enforce it). I read, too, that while states have the authority to make treaties and sign up to conventions if they wish, they can also break them at will if that suits, and that no other state or institution has (or can have) legitimate authority to prevent them, or penalise them for doing so (or even, it seems, have grounds to criticise them, since states are not moral agents).
So when those of us who were against the Iraq war complained that it was a war of aggression, or we cite the Geneva Conventions (rather than basic morality) on the treatment of prisoners, or the Law of the Sea when unarmed passengers are killed on ships in international waters, or the discriminatory application of the Nuclear Non-Proliferation Treaty, or we welcome the establishment of the ICC, apparently we haven't a philosophical leg to stand on.
If nothing short of a world state (inevitably oppressive and therefore far from desirable) can legitimately override national sovereignty, what is to be done? Are we stuck forever with a Hobbesian state of nature in the international arena, where the strongest countries can generally expect to prevail over the wishes and needs of the weakest, backed by the threat of superior brute force?
I was warned that studying philosophy would force me to rethink some of my fundamental beliefs, which was true and is stimulating, but I'm finding this very hard to come to terms with. Is there a way round or over the sovereignty stumbling block to greater global justice, a philosophical route to legitimacy for what I think of as progressive international institutions?
Despair not. Earnest idealism, such as even this decrepit correspondent once had, will only lead to progress if it takes the time to understand the problem; and an understanding of the problem will only lead to progress with a dose of earnest idealism. You show a promising and, may I say, rather unusual combination of the two, Kant's precedent notwithstanding.
Indeed, the wish of international law is not the fact of it, and the tendency with progressive journalism and right-thinking persons generally has been, lately, to pretend that it is. You have shaken yourself out of the pretence. Well done. There is something earnest and hopeful about the pretence, which is aware of itself as at least an exaggeration, but which imagines that by the sheer force of prayer we could make order and law in the world by believing in it. And although that is not enough, there is something in the faith of it which is necessary all the same, in as much as we will not make law and order in the world by having the kind of black faith in Power that the Nazis had. But it does not follow, as some leader writers seem to think it follows, that to fight that black idiocy we are obliged to leap to the barricades for any bright foolishness.
The obvious hard case, and the occasion for much warranted and unwarranted idealism, is the European Union. For the EU is not merely a treaty, but a treaty *process*, in which much of our hopes and material interests are invested. As Germany experienced in the 19th Century, a treaty process in pursuit of a customs union (zollverein) can, by stages, effect a political union. As we include other nations in our decision making process, so we include them in one state. For what is a state, if not a system for deciding the regulation and policing of markets and exchange? At least, this is the question posed by those hopeful of what you call 'a philosophical route to legitimacy' for international law.
But, as becomes evident when the EU hits one of its periodic political hitches, the trouble with 'a philosophical route to legitimacy' is that it is just that. And there is much at stake in the idea of a nation that is not in the least bit intellectual or philosophical. A state is not simply a device for securing one's rational best interests. A nation state develops a kind of collective Ego, however nebulous, which no 'philosophical route to legitimacy' can quite touch.
Like you, I wish that it could. But it strikes me that our efforts would be better directed at building some new common identity than at forcing diverse old identities to comply with a 'philosophical route to legitimacy'. The successful international political entities of the past have done both, in varying proportions. Many states have been, along the way to their Pax Romana, pretty bloody, and the hope of the EU is that it offers a bloodless kind of unification. But there is an obvious sense in which the old pattern, despite the hopes of internationalists, has not quit the scene. Neither the UN nor the EU made the space in which they try to grow. Roosevelt and Truman, and all the allied forces, did that.
Yes, Penny, there is a way. But to understand that way, and to understand why the currently popular concept of national sovereignty seems to be such a stumbling block, you are going to have to recognize that some of your more cherished moral premises are without foundation. (And, of course, if you wish to follow the way that I am presenting here, you are going to have to join the very few of us who are fighting to teach the general population that some of their most cherished moral premises are also without foundation.)
Most people today, including most philosophers today, labour under the premise that there are three different aspects to doing the 'right thing'. First, there is 'things as they are': in your question you provide the examples that nationalism causes violence, and that governments of nations focus on parochial self-interest. Second, there is 'things as they should be': you provide the examples of global peace, harmony, justice, a reduction of conflict and more peaceful coexistence. And third, there are the 'ethical/moral principles' which, if we all would only adhere to them, would get us from 'things as they are' to 'things as they should be'.
You describe the 'things as they should be' in positive terms (naturally). You have passed a value judgement on the 'things as they should be', and you have judged that they are 'good'. (Obviously, since 'should' and 'good' go together.) You have in all probability inherited a suite of moral tenets from the general Judeo-Christian-Islamic tradition that says that peace, harmony, justice, absence of conflict, and peaceful coexistence are good things. But do you understand why they are considered good things? Do you have any foundation behind the judgement that such things are good things? Have you thought this through yourself, or have you simply adopted the moral tenets of your environment?
You describe the 'things as they are' in negative terms. You look forward to the decreasing importance of nation states as a way of reducing the principal causes of conflict. You complained that the Iraqi war was a war of aggression. And so forth. It is a reasonable assumption, then, that you view the way things are as somehow not desirable. You have passed a value judgement on the 'things as they are', and you have judged that they are 'not good'. But you have made such a judgement (as most people do) on the basis of those moral tenets for which you have no foundation.
The central difficulty that you are facing, is that the concept of 'morally good' has lost its anchor. Most people (including most philosophers) use the concept without really understanding its meaning. As a result, we find ourselves in a situation where moral disagreements become one person's opinion versus another. And the winner is the person who yells the loudest (or most persuasively), or carries the biggest stick. In our modern culture the loudest yellers are to be found in the church pulpits, and the biggest sticks are to be found in the government legislatures (or what passes for a legislature in a non-democracy). Lacking any reason to change things, and any interest in finding one, they have reinforced the moral tenets of their ancestors, without understanding their basis. They are commandments with no underlying authority. The only way that people will follow such commandments, is if they are persuaded that to do so is a good thing. There is no way to persuade someone who does not agree with you. There are no reasons you can give someone to justify the commandments other than 'Do as I say, Or else!!'
It used to be, in ancient times, that the foundation of morality was located in the 'telos' (to use a Greek word) of Man. To Aristotle, for example, a 'good' person fulfilled his/her proper function well. And a person had a proper function: he was a husband, father or son, or she was a mother, daughter or wife; or a farmer, fisherman, warrior, or citizen of the state. Each of these roles defined a well understood functional requirement that a person had to fulfill well to be called 'good' at it. The concept of 'good' was a functional concept. The concept of fulfilling a function well was a matter of factual description. Factual descriptions of how a person fulfilled the functions justified the labels of 'good'. There was no dichotomy between 'is' and 'ought'. If she is a ship's captain, then she ought to do those things that would constitute being a good ship's captain. If she is doing those things that constitute being a good ship's captain, then what she is doing is 'good', and 'a good thing'.
In the Dark Ages, the 'telos' of Aristotle gave way to the 'telos' of God. The source of the function changed, but the basic functional foundation of moral tenets did not. God handed down moral commandments so that we could properly fulfill our function within his grand design. A 'good thing' was something conducive to the fulfillment of God's purpose.
During the Age of Enlightenment (1637-1815) we lost the concept of a proper function of Man. Aristotle was discredited for various reasons, and with him went his 'telos'. God was dethroned, and with her went our 'telos' within her design. But the language of morals did not reflect this historical evolution. So we have lost the basis for our moral tenets. Why is being honest a 'good thing'? Why is justice a 'good thing'? Is it really just a matter of opinion?
What is needed is a new 'telos' for Man that can act as the foundation for a renewed understanding of moral language, moral judgements, moral rules. And the science of genetics has given us that new telos. Genetics tells us that the function of the individual organism (any organism, of any species) is to ensure the replication and flourishing of the genes that encode the recipe that is the organism. This, then, gives us a new functional description of Man. And it provides the basis for a renewed functional understanding of 'good' and 'moral'. That is good and moral that tends, on average and in the long run, to promote the proliferation and flourishing of our genes. That action or choice is good or moral that in our best judgement will most likely promote the proliferation and flourishing of our genes over the long term.
Now, with that basic principle, we can go back and examine those 'things as they are' and see if indeed they are as bad as you initially judged. Wars and conflict, national sovereignty, breaking of treaties and conventions, and so forth, can be morally necessary if they are most likely, in our best judgement, to promote the proliferation and flourishing of our genes over the long term. But this is not a 'free ride' ticket to do just as we please, or whatever might seem in our short term interests. As an empirical observation, Man is a social species. We tend to flourish best when we cooperate in a social environment free of coercion, and free of chaos. So peace, harmony, justice, reduction of conflict and peaceful coexistence are also 'good things'. But so is nationalism to some degree.
The world out there is over-populated with characters who would employ coercion to expropriate what we have without compensation. (They aren't just out there, of course. We have our share of home-grown thieves and extortionists, including many in government. But the focus at the moment is on nationalism.) The only defence that we, as individuals, have against such expropriation is our mutual cooperation in self-defence. It started out with families and tribes, grew to the city-state, and then to nation-states. The point is that within the boundaries of the nation-state, people are assumed to enjoy mutual cooperation and to (more or less) voluntarily renounce resort to coercion. Those outside the nation-state boundaries are assumed not to adhere to this 'civility'.
The growth of international organizations and international agreements reflects the trend to find common ground with others in other nation-states in some areas. We all recognize that cooperation for mutual benefit is better than conflict. But we need our guarantees that 'those others' will not resort to coercion to expropriate our wealth. And I should emphasize that this attitude is universal, and active at all levels from the individual to the nation-state itself. It is a natural consequence of our new 'telos'. (It is a natural consequence of our genes in action.) An altruistic concern for the welfare of others, at the possible expense of our own, is self-genocidal and 'morally bad'. A certain amount of xenophobia is a natural and rational self-defence mechanism. (Which is not to suggest that an unreasoned xenophobia makes any sense. Archie Bunker lost out on a lot of things he could have gained by fair-trading with his 'unacceptable' neighbours.) So there is no incompatibility between national sovereignty and international institutions. International institutions are just the manifestation of the growing extent to which we find international cooperation to our benefit, while protecting ourselves from the coercive threats out there. Nation-states, being the embodiment of our mutual cooperation in self-defence, will only decrease in importance as the threats of coercion out there decrease.
(What I personally do see occurring is a shrinking in the effective size of nation-states as the necessary population mass required to ensure self-defence decreases with the shrinking of external sources of coercion. The world's population is no longer really faced with major acquisitive nation-states.)
The philosophical leg that you are looking to stand on is 'intelligent self-interest'. You will get nowhere as long as you simply proclaim it your opinion that adhering to the Geneva Conventions is the better way to treat prisoners, or that the Law of the Sea should prevent the killing of unarmed passengers in international waters, or that application of the Nuclear Non-Proliferation Treaty is discriminatory, or that the establishment of the ICC is a good thing. What you need to do is show people how it is in their individual best interests to adhere to the Geneva Conventions, or the Law of the Sea, etc. With a functionally based understanding of basic morality, appealing to a person's self-interest is the proper moral approach.
Finally, I will conclude with a few words on 'super-patriotism' and 'extremism'. There are two different sources for such behaviour. One is simple ignorance. Some people think that they are right and we are wrong, and their morality permits them to employ coercion to attain their ends. They are ignorant of the empirical evidence that strongly demonstrates that whatever your goal, you are far more likely to attain it through voluntary cooperation than you are through coercion. The other source is moral abdication. A lot of people are persuaded (by charismatic religious or political orators) to abdicate their moral responsibility, and let others make the moral judgments for them. Once they abdicate their responsibility to themselves, they become easy pickings for suicide bomber recruiters and other such extremist operators.
If we taught the proper functional meaning of 'moral good' in the schools, we would be faced with a lot fewer people who have abdicated their moral responsibility to themselves. It may not have prevented the Iraqi war (we can have a separate argument as to whether it was a morally necessary war), but it would certainly change the face of modern politics. And it would certainly eliminate such home-grown abominations as religious fanatics.
Hope you found these few thoughts enlightening. I look forward to whatever comments you may wish to offer in reply.
The question of how the actions of nation states can be subject to law is the most urgent question of our times. It is, above all, a practical question. If the United Nations and the Security Council are not sufficiently effective to deter or prevent wars of aggression then we should be figuring out ways of making them more effective. Which is of course exactly what political thinkers and political leaders have been doing. If we succeeded, would it really matter if this went against some treasured philosophical principle? I don't think so.
Sovereignty is essential, as Hobbes argued in Leviathan, because in the absence of a sovereign to whom one cedes the power to enforce law, there can be no justice and no law except the law of the jungle, the war of 'all against all'. But Hobbes also argued, with perfect consistency, that a monarch, ruling alone, is the only effective sovereign. As soon as you introduce limitations to the power of the monarch (a parliament, for example) the problems that the idea of a sovereign was introduced to solve break out all over again.
The problem is encapsulated in the famous example of the Prisoners' Dilemma. Of all the many game-theoretic strategies that have been explored, Hobbes' solution is the only one that guarantees that an agreement or contract will be honoured by both parties, because they are answerable, not just to one another but to a third party who has the unfettered power to punish infractions with lethal force. The third party, once appointed, cannot be unappointed. That's what ensures no backsliding on the deal.
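The game-theoretic point can be made concrete with a small sketch. The payoff numbers below are the conventional textbook values for the Prisoners' Dilemma, chosen purely for illustration (they are not taken from Hobbes); the code simply checks which move is each player's best response, first without and then with a punishing third party.

```python
# The Prisoners' Dilemma with conventional textbook payoffs.
# payoffs[(move1, move2)] = (payoff to player 1, payoff to player 2)
# "C" = cooperate (honour the agreement), "D" = defect (break it).
payoffs = {
    ("C", "C"): (3, 3),  # both honour the agreement
    ("C", "D"): (0, 5),  # player 1 is exploited
    ("D", "C"): (5, 0),  # player 2 is exploited
    ("D", "D"): (1, 1),  # mutual betrayal
}

def best_response(opponent_move, table):
    """Player 1's payoff-maximising move, holding the opponent's move fixed.
    (The game is symmetric, so player 2's reasoning is identical.)"""
    return max("CD", key=lambda m: table[(m, opponent_move)][0])

# Without an enforcer, defection is the best response to anything, so
# rational parties end up at (D, D), even though (C, C) is better for both.
assert best_response("C", payoffs) == "D"
assert best_response("D", payoffs) == "D"

# Hobbes' sovereign: a third party who punishes any defection
# (modelled here, arbitrarily, as subtracting 10 from a defector's payoff).
punished = {
    (a, b): (p1 - 10 * (a == "D"), p2 - 10 * (b == "D"))
    for (a, b), (p1, p2) in payoffs.items()
}

# With the credible threat in place, honouring the agreement dominates.
assert best_response("C", punished) == "C"
assert best_response("D", punished) == "C"
```

The sketch also shows why the punishment must be severe enough to outweigh the temptation to defect: a fine smaller than the gap between the exploitation payoff and the cooperation payoff would leave the dilemma intact.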
No-one accepts this today in the political arena. Why not? Logically, Hobbes' argument is unassailable. To absolutely guarantee peace, the humble acquiescence of every subject to the law of the land, nothing less than the absolute power of a dictator is required. The problem is, kings and dictators have an awkward tendency to behave in ways which are not necessarily aimed at the good of their subjects. (But that's OK, because they will face the judgement of God.)
Having made the experiment, human nations have settled for less. We have a political system (I'm talking about liberal democracy, although you could say similar things about other political systems) which works, for the most part, in maintaining the peace of the nation. Bad things still happen. There are political stalemates when we need urgent political action; the police force struggles to stay on top of the crime rate; civil disobedience and strikes throw their spanner in the works.
'Thank goodness that they do,' would be a reasonable response. Can you imagine what kind of state it would be, where the decree of the ruler was absolute, where every crime and misdemeanour was instantly punished? Vid screens in every room just like in 1984. You drop a piece of chewing gum on the pavement and Whooof! off you go in a puff of smoke. (Although I know a few people who would agree to that.)
So my argument would be, if we are prepared to compromise the logic of Hobbes' response to the prisoners' dilemma for the sake of practicality, then what this means, in effect, is an admission that the idea of a 'sovereign' is a fiction. It may be, as many believe, an indispensable fiction, but it is a fiction nonetheless. I recognize the law of the land, by and large, but there are cases where my conscience, or just urgent practical need, overrides respect for the law. One drives through the occasional red light.
The United Nations is a building in New York. It is also a fiction. It doesn't exist except in the minds of the political leaders who founded it and the delegates who attend it. Belief that the UN can work is necessary in order to make it work. And it has worked, by and large; at least one can argue that world affairs would have been in a far worse state without it.
There isn't a question of what may or may not 'legitimately' override national sovereignty from a philosophical standpoint. If a resolution is passed by the UN, then it is legitimate, because that's just what the member states have subscribed to. Of course, the real world being what it is, resolutions fail to be implemented, just as national laws fail to be observed. Punishments and sanctions only deter in proportion to their severity: that's a problem for national law as well as for international law.
But can't philosophers figure something out? Insofar as this is a problem for game theory, you need game theorists; insofar as this is a problem of practical politics, you need political scientists. Maybe somewhere in there is a role for utopian dreamers. (The League of Nations was once a utopian dream. Its failure led to the UN.)
The most intractable problems of our time require more than a number-crunching or logic-crunching response; they require originality, creativity. Something new, at any rate. I do wonder whether there is any meaningful role for political philosophy. You want 'philosophical legitimacy' for international law? You've got it. What we want is just to make international law more effective, without it hurting too much. Maybe that just shows the colour of my philosophical creed (for want of a better word, call it pragmatism with a small 'p').
(45) Lois asked:
There are situations where the pursuit of our own happiness and peace of mind conflicts with that of another. Must we always put the interests of others before our own? Is there any justification for pursuing one's own welfare at the expense of someone who stands in the way of our goal?
This question came in a while ago, and I wasn't going to answer it. Other Ask a Philosopher panel members have already had a go, and I couldn't really see that I had anything to add. (Lois didn't provide an email address, so she'll have to wait, rather a long time I'm afraid, until the next series of Questions and Answers is posted.)
But something happened to make me look at this question again. (It's not something I want to talk about here.) The thought occurred to me that pursuing this question from Lois can take you into a very dark place indeed.
But let's start off with the more obvious points that a moral philosopher would make.
I can think of two clear cases, which few would dispute, where in the one case it was perfectly reasonable to put oneself before another; while in the other case one has a clear obligation to put the other person before oneself.
Let's say you are one of two shortlisted candidates for a well paid executive position, waiting to be interviewed. This is the first time you have reached the short list after scores of unsuccessful job applications.
Your stomach churns as you realize how much depends on how you perform in this interview. You are a divorced mother of three. You are behind with your mortgage payments, and you and your children are threatened with eviction from the home they have lived in all of their lives. Your age is against you, and it was only pure luck that you managed to get this far in the selection process.
The other candidate catches your eye. 'How long do you think we're going to have to wait?' You mumble something in reply. But the other woman needs to talk so you listen. You listen with a growing sense of amazement to her story about her husband who cheated on her with his personal trainer, her subsequent divorce, her three young children and how far she is behind with her mortgage payments. She could be you. She has as much to gain, or to lose, as you have yourself.
What should you do? There's no question. You go for the job. In the interview you fight for your happiness and the happiness of your children. You fight for all your lives.
Our moral intuitions tell us (at least, my moral intuitions tell me) that in a situation of fair, or even not so fair, competition such as the one I have described, there has to be a winner and a loser. You have every right to strive to win with all your might, even though as a necessary consequence the other must lose. Until human beings finally succeed in creating Utopia, that's the nature of the society we live in.
I've painted this in black and white colours, but it is not just an isolated, extreme example. There are many, many ways in which human beings have to fight for their happiness and peace of mind, knowing that there will inevitably be winners and losers in the game of life. Of course, you can do your best to help those less fortunate, give generously to charity and good causes. But if it was wrong to compete in the first place, then charity and good deeds would merely be a salve to ease one's guilty conscience.
In the example I have just given, it could be objected that I was unfairly raising the stakes as each candidate was naturally concerned for the well-being of her children. I don't think that's the crucial point, however. My original idea was to have two not-so young but single Philosophy PhDs competing for an academic post. (I can sympathize, but not that many would.) Exactly the same considerations apply. One is destined for a life in academia and the realization of all his or her dreams, the other will end up as a bank manager. And both believe this is the very last chance for either of them.
But what about a parent's duty to one's child? Isn't that the clearest case where one has an obligation to put the happiness of others before one's own? The very definition of a 'bad mother' or 'bad father' is a person who refuses to do this. Again, I'm relying on moral intuition, but I expect the majority of parents would agree. It's a cliché, but clichés are often true, that parenthood is a sustained and bloody exercise in self-sacrifice.
Well, I could go on to talk about all the cases in between, where we are pulled both ways, towards wanting to say that one has an obligation to put the other first, and saying that one is justified in putting oneself first. Or, I could delve into moral theory in order to account for these alleged intuitions: what would a utilitarian say? or a Kantian deontologist? or a virtue ethicist? or an evolutionary biologist?
But I leave that as an exercise.
What concerns me is a disturbing vibe that I get with this question. Our 'happiness and peace of mind' is at stake. What would one not do for the sake of one's happiness and peace of mind? As a parent, you can't be happy if your children are unhappy. And if there really is no prospect that one will ever attain happiness, wouldn't it be better just to end it all?
And to think that you could be happy, were it not for the one person standing in your way!
What you would say to the mother of three who fails to get the job is that it isn't the end of the world. OK, so you get evicted from your home. That's terrible. But people survive worse, and they end up making good lives for themselves. Or to the disappointed PhD, one would remind them that they still have their life ahead of them, and there are other ways to pursue one's interest in philosophy besides paid employment in a university.
When do we not think this? When are we absolutely and utterly convinced that unless XYZ happens, our happiness and peace of mind will be gone forever, never to return? Love would be pretty high on the list. But not the only item. It could be a political cause that you have dedicated your whole life to. Or something as banal and unidealistic as the mistaken belief that you can only be happy having lots and lots of money.
Which brings us to that dark place, which popular films and TV dramas love to explore.
In Lois' question, there was a nice vagueness in the idea of doing something 'at the expense' of another. One naturally assumes that we are dealing with a tit-for-tat situation. What one stands to win, the other stands to lose. But there's no logical reason for this assumption. That is the way a murderer thinks too.
(53) Shanna asked:
What is the relationship between common sense moral intuitions and Moral philosophy?
Whilst in philosophical discourse generalisations are best avoided, it seems fair to say that the premise upon which most, if not all, moral codes are based is the principle that we should do unto others only that which we would have others do unto us. The issue, for me, that Shanna's question raises is whether this principle derives from nature or from nurture: that is, whether its moral values derive from worldly experience, or whether they are given to us as a priori intuitions, ideas or concepts.
It should be said from the outset that this oft-debated, yet never quite resolved, question occupies different schools of philosophical thought, and one's conclusion depends on which of these schools one finds most convincing. Amongst these differing or opposing approaches is that advanced by John Locke (1632-1704) who, echoing Aristotle, held that there is nothing in the mind that is not first in the senses. According to this view, the mind at birth is a tabula rasa, a blank slate upon which experience will write its moral and other codes of behaviour. For Locke, there are no a priori, innate ideas or concepts of the world before we have experience of it. Against this view was the Enlightenment belief that man was inherently good, and that evil was the result of the pollution of innocence by corrupt social institutions: organised religion and politics. Another, not unrelated, debate takes place between Empiricism and Rationalism. Empiricists, of whom David Hume, like Locke, may be considered one of the most notable adherents, argue that all our ideas and concepts derive from experience, whilst Rationalists, such as Descartes, take the view that there are, within the mind, certain a priori ideas that do not depend on empirical experience. Somewhere in between these opposing views is Kant's argument that 'though all our knowledge begins with experience, it by no means follows that all arises out of it'. Immanuel Kant (1724-1804), a committed rationalist, was disturbed by the argument, set out by David Hume (1711-1776) in his An Enquiry Concerning Human Understanding (1748), that we know the mind only as we know matter: by perception, and declared that he was awakened from his dogmatic slumber by his contemporary's argument that experience is the basis for knowledge.
Whilst the aforementioned quotation is taken from Kant's Critique of Pure Reason, and refers to his view that before worldly experience there are within the mind the a priori forms of intuition, space and time, and the concept of cause and effect, which enable the mind to perceive things given to it in experience not as things in themselves, as noumena, but as phenomena, as things as they appear to human consciousness, it might equally be taken as an expression of his moral philosophy as expressed in two of his other major works, the Foundations of the Metaphysics of Morals (1785) and the Critique of Practical Reason (1788), two works in which Kant deals with a common-sense conception of morality based on what he calls the categorical imperative. By 'categorical' Kant means that it applies in all situations, and by 'imperative' he means that it is commanding and thus absolutely authoritative.
Although Kant offers several formulations of the categorical imperative, the two that are most often quoted are, first, the one which states that one should always act in such a way that one is able at the same time to will that the maxim of one's action be in accordance with a universal law of nature, and second (and the one most relevant to the issue at hand), the one which states that one should treat humanity, whether in one's own person or that of anybody else, never merely as a means but always also as an end. What Kant is saying is that inherent in human reason is the capacity to determine that which is right and that which is wrong. Thus, in the same way that Kant argues that there are, within the mind, before experience, the a priori forms of intuition, space and time, and the concept of cause and effect, so too does the mind contain the innate ability to discern, through practical reason, the difference between right and wrong. If one accepts Kant's categorical imperative, one can say that common sense moral intuitions are the foundation stone of moral philosophy.
However, whilst the arguments set out in the Critique of Pure Reason, the Critique of Practical Reason and the Foundations of the Metaphysics of Morals may appear laudable enough in their own right, it should be said that both his attempt to forge a synthesis between analytic propositions and synthetic propositions in the first, and his attempt to lay the foundations of moral philosophy in the latter two, are found wanting. In the former case, in attempting to show that there are propositions which appear to be synthetic, drawn from experience, but are in fact a priori (hence the term 'synthetic a priori propositions'), he succeeded only in showing that mathematical formulations, such as 7+5 = 12, fit into this category. In the latter, whilst he shows that the religious argument that we should treat others as ourselves can be shown to be in accord with human reason, experience shows us that the moral conclusion of the 'categorical imperative' is not one that is universally held. That is, there is no universal consensus on human rights; on the right to free speech; on the right to life, or on the issue of abortion in general; there is no consensus on the issue of euthanasia, on the right of same sex couples to marry or to form civil partnerships; and there is no consensus on the right to health care, on education, or the right to bear arms.
Thus we find that, notwithstanding Kant's 'Copernican Revolution', the human mind is not privileged with knowledge of a transcendent deity, freedom, or 'things in themselves'. In fact we can say that there are no inalienable or universal rights applicable at all times and in all places; there is no Socratic daimon whispering moral imperatives into the corporeal ear; nor is there a Cartesian homunculus with a moral compass steering the soul through the turbulent waters of life. Moral codes are not given a priori from some transcendent or metaphysical realm; rather, they derive from worldly experience. If there is a relationship between common sense and moral values, it is a relationship that manifests itself as the instinct for survival: an instinct that drives us to devise 'moral' laws that allow us to survive and thrive in an alien world. As Thomas Hobbes (1588-1679) says in his magnum opus, Leviathan, it is through self-interest that man enters into a social contract with his fellow beings. It is by entering into this compact that man is able to move from a 'state of nature', a state in which the life of the individual is 'solitary, poor, nasty, brutish, and short', to a 'state of peace'. It is common sense and the instinct for self-preservation that encourage men in the 'state of nature' to hand over the reins of power to a sovereign who can in turn impose and enforce certain moral codes of behaviour that guarantee that each can exist in society without fear or danger from any other man. Moral laws, then, do not depend on innate moral intuitions, for if they did there would be no need for a sovereign power to impose such laws on the populace, or to employ forces to ensure that these codes of practice are not broken.
The Italian philosopher Giambattista Vico (1668-1744) agrees with Hobbes that moral order derives from common sense. However, rather than the common sense of the individual, Vico argues that it is common sense in the form of communal sense, the sensus communis of the entire community. For Vico, moral codes of behaviour were first introduced when early men, more beast than human, in an effort to appease the anger of an (imagined) anthropomorphic deity, felt compelled to regulate their lives by introducing the institutions of religion, marriage, and property (an institution first introduced as a right for the bodies of the dead to be interred rather than left, as had previously been the case, to rot or decay above ground). It is because moral imperatives derive from the collective common sense of the community that they cannot be said to hold across all time, but change as needs demand. Moral guardians, or as Vico calls them, 'theological poets', for all their claims, are not divinely inspired people with privileged access to the wishes or demands of a transcendent deity; rather, they are conduits through which the collective will of the people finds a voice.
In closing, it seems that whilst nature decrees that intuitions, in the forms of space and time, have a role to play in how we perceive our world, somewhat paradoxically, it does not decree that intuitions have a role in the formulation of ethical values.
(54) Alistair asked:
I'm working through the exercises in a book on logic ('Logic' by Wilfrid Hodges) as part of an effort to study philosophy more formally. I have long been fascinated by the project that began with Frege and culminated in Godel, partly from a historical standpoint. Also, I intend to study Russell and Wittgenstein, so a basic understanding of logic seems essential.
However, my main interest is philosophy. I am not particularly interested in going too deeply into logic itself, into, for example, its applications in computer science and linguistics. So my question is, what use is it to philosophy (rather than computer science etc.), and what advantage for philosophy does modern post Frege logic have over Aristotle's logic?
Is it that modern logic is thought to subsume or replace Aristotle? Hodges' book takes in propositional calculus, semantic tableaux and predicate logic, but seems to make little mention of syllogisms or how to detect common logical fallacies (or not in a form I recognize, at least). I would have thought that these things were a more useful foundation for philosophy than formal languages. And even if, technically speaking, modern logic does indeed cover everything in Aristotle, with a lot more besides, is it even appropriate to apply mathematics to an activity that takes place through language?
My concern is that, in trying to learn philosophy, I will be wasting my time if I get seduced by all those exotic symbols. I doubt that it is going to help me make sense of the Critique of Pure Reason. Does it make sense to teach this kind of logic as part of the teaching of philosophy? After all, it came to be regarded as a kind of philosophy only because of Russell, whose project to place knowledge on firm foundations is now widely held to have been a failure, as shown by Godel (in logic) and Wittgenstein (in philosophy).
Why does mathematical logic have pride of place, when practical philosophy mostly does very well with older concepts such as syllogism, circular argument, begging the question, reductio ad absurdum, and so on?
Alistair, you have some very wrong ideas about philosophy. Philosophy is not a subject that progresses so that we can say Godel and Wittgenstein showed that Russell was wrong and everyone accepts that. In philosophy there are no generally accepted truths and everything is always open to debate.
You have now got to a stage where you are faced with having to do something difficult and your response to this is to say 'Can't I just skip this bit and concentrate on the things that I find easier to grasp'.
You mention 'practical philosophy' but there is no such thing as practical philosophy, there is only philosophy. However (there is always a however), you are trying to study philosophy on your own, so going too deeply into formal logic may not be the best place to start. At some stage in your studies, though, you will need to have a grasp of what the propositional calculus and the predicate calculus are. You will also need a firm understanding of what a valid argument is, what a tautology is and what a contradiction is, and of how logic can be an axiomatic system and what concepts such as completeness and consistency mean when applied to an axiomatic system.
It is an unfortunate fact that to do philosophy well you need to know everything about everything but of course none of us do know this. I don't know the logic book you mention so I can't say if it is a good book for a beginner or not. If at some stage you decide to continue your studies of logic then feel free to ask as many questions about it as you need to.
Alistair, a long question, but a good one. First, a brief answer:
1. There is indeed no need for fluency in the formal languages of logic in order to study and understand philosophy.
2. Critical thinking in philosophy and in everyday life is indeed better served by informal logic.
3. I think that the infection of much 20th Century analytic philosophy writings by logical symbolism is past its peak, and philosophical logic is emerging with new vigour from decades of debility due to that infection.
To amplify each point:
1. Even the teachers of analytic philosophy recognize that expertise in formal logic is unnecessary. Thus, the 2010 study guide for the BA (Philosophy), Uni of London, says in its blurb about the compulsory (Philosophical) Logic module: 'Formal logic does not figure as such in the examination..., but some knowledge of elementary formal logic is necessary for the subject as a whole'. It then goes on to recommend a 'gentle introduction' to formal logic in Guttenplan's book. But Hodges will serve as well, and you have not wasted time in reading it and doing its exercises by way of your gentle introduction. As a student of philosophy your focus will be analysis of and reflection on concepts that arise out of, or are built into, logic and reasoning: validity, identity, necessity, truth, reference, definite descriptions, conditionals. In addition you may wish to reflect on the reasons for and value of nonstandard logics which deny bivalence, or which deny Aristotle's laws of thought such as LEM (the law of excluded middle) or even LNC (the law of non-contradiction). Also, a basic understanding of non-bivalent, including fuzzy, logic is needed to understand the concept of vagueness. I can say from personal experience that a decent grasp of philosophy, including excellent marks in Uni exams in Phil of Maths and Phil of Science, is possible with virtually no knowledge of formal logic.
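As an aside (my own illustration, not drawn from Hodges or the London study guide): it may reassure you that modern predicate logic does capture Aristotle's syllogisms. The classic 'Barbara' syllogism, along with the two laws of thought mentioned above, can be written as:

```latex
% Barbara: All men are mortal; Socrates is a man; so Socrates is mortal.
\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\;
\mathrm{Man}(s) \;\vdash\; \mathrm{Mortal}(s)

% The two 'laws of thought':
\text{LEM: } p \lor \lnot p
\qquad
\text{LNC: } \lnot(p \land \lnot p)
```

In this sense modern logic does subsume the syllogistic: every valid Aristotelian mood becomes a valid predicate-logic argument, while predicate logic also handles multiply quantified sentences ('everyone loves someone') that the syllogistic cannot express.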
2. The last 30 years or so has seen development of 'informal logic' allied to the teaching of Critical Thinking in schools and universities. The online Stanford Encyclopaedia of Philosophy entry on 'Informal Logic'(2007) is a good source.
3. The love affair between analytic philosophy and logical symbolism blossomed with the publication in 1905 of the Theory of Descriptions in Russell's 'On Denoting' (that 'paradigm of philosophy', as Ramsey called it in 1931). Russell was widely seen as ushering in a new age of rigour: many old philosophical problems would simply be shown up as confusions of thought; woolly Continental metaphysics was exposed; Meinong's supposed metaphysical excesses were driven out of ontology. And indeed it was a shot in the arm to philosophy, and a paradigm in the sense later articulated by Kuhn. Our view is more nuanced these days: there's much to be said for Meinong, and the Frege/Russell strictly logical approach to language (semantics, or what the words mean) was soon seen to be inadequate, failing, among other things, to grasp the pragmatics (what the speaker intends) so key to natural language. But, at any rate, Russell couched his theory in the symbolism of his (and Whitehead's) Principia Mathematica, starting the trend of discussing such matters in terms of this symbolism when they can be understood without it (of course it is true that some people find it easier to grasp symbolically, and indeed I think I am one of these people).
Finally, and incidentally, I think the main Russell/Frege project was to reduce maths entirely to logic. They failed, but only because it is impossible, and they nevertheless reduced arithmetic entirely to logic supplemented only by what Frege rather generously referred to as Hume's principle (the idea of equinumerosity as one-to-one correspondence).
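For the curious (the notation here is mine, one common modern formulation): Hume's Principle says that the number of Fs equals the number of Gs just in case the Fs and the Gs can be paired off one-to-one. Symbolically:

```latex
% Hume's Principle: the number of Fs equals the number of Gs
% iff the Fs and the Gs are equinumerous.
\#x\,Fx = \#x\,Gx \;\leftrightarrow\; F \approx G
% where F \approx G abbreviates: there is a relation R that
% correlates the objects falling under F one-to-one with the
% objects falling under G.
```

Frege's theorem, as it is now called, is the result that the axioms of arithmetic can be derived in second-order logic from this single principle.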
All the best with your studies.
(55) Roxane asked:
Real happiness is helping others. Who among philosophers in the next would you best attribute that principle?
Whilst there are many philosophers, such as Hannah Arendt and Edith Stein, who could be said to fit into this category, the first that comes to my mind is Emmanuel Levinas.
For Martin Heidegger, philosophy essentially seeks to place man in his context with the world, and only incidentally tells us what are, or ought to be, the relations between one person and another in society. For Emmanuel Levinas, however, one's ethical relation to the other takes precedence over one's relation to oneself. For Levinas, the other is absolutely other: beyond comprehension, beyond complete understanding. Face to face with the other, the self is obliged to put responsibility for the other before itself; for this reason the relation puts the self in the position of hostage: the self becomes slave to the other. Thus, Levinas expounds, or advocates, an ethics of obligation and self-sacrifice to the other.
Levinas was born in Lithuania to Jewish parents. He moved to France in 1923. His philosophy is directly related to his experiences during World War II. His family died in the Holocaust and, as a French citizen and soldier, he became a prisoner of war in Germany. While he was in the prison camp, where he was forced to perform labour, his wife and daughter were kept hidden in a French monastery until his release.
One of Levinas's main ambitions is to attempt to describe a relation to another person that cannot be reduced to understanding. He finds this in what he calls the 'face-to-face' relation. What Levinas means by the face-to-face relation, paradoxically, is not a relation of perception or vision but a linguistic connection. The face is not something we see but something we communicate with. When I am communicating with another person I am not reflecting on the other, but actively engaging in a relation where I am focussed on the person in front of me. I am not contemplating, I am conversing. Levinas's point is that unless my social interactions with others are underpinned by ethical relations I am in danger of failing to acknowledge the humanity of the other.
Ethics, for Levinas, is the critical questioning of the liberty, spontaneity and cognitive enterprise of the ego to reduce all otherness to itself. What this means is that it questions the authenticity of the ego, the self, and its tendency to reduce others to understandable objects. The term Levinas uses for the ego, or the self, is the 'Same'. The 'Same' refers not only to subjective thoughts, but also to the objects of those thoughts: not only to the domain or world of the ego or self, but also to the domain of others. Levinas's ego is not the Cartesian ego: not some homunculus existing in a solipsist vacuum, separated from the body and unsure of the validity of the reality of others, but an ego whose distance from others fades into insignificance.
For Levinas, as I have said, the ego, the 'self', is not some Cartesian homunculus, but an embodied being of flesh and blood, a being capable of hunger, which eats and enjoys eating. 'Only a being who eats', says Levinas, 'can be for the other'. What he means by this is that only a being that knows hunger and enjoys eating can understand what it means to give its bread to others from out of its own mouth. Levinas's ethics, then, is not some deontological obligation to universalise maxims, but an appeal to allow one's subjectivity to remain open to what Simon Critchley calls 'the pangs of both hunger and eros [love]'. Subjectivity, he says, is not Descartes' ego cogito ('I think'), rather it is the declaration 'Here I am!' And as an existential being I am obliged to answer the call of the other. Ethics, my responsibility to the other, says Levinas, begins and ends with me.
Levinas's 'big idea' is that the relation to the other cannot be reduced to understanding, and that this relation is ethical. For Levinas, it is our empathy or sympathy for the other that takes precedence over our own needs. The argument that may be put against this is: how can we really know what the other is experiencing or feeling? That is, if the other tells me he or she is in pain, sad, or in need of something, how can I know with certainty that the other is being truthful? The answer, of course, is that I cannot. However, in fairness to Levinas, he never claims that his 'big idea' will lead to a full understanding of the other. What he is concerned with is reminding us of our moral obligations to the other; that is, that by quite ordinary acts of civility, hospitality, and kindness, we can make the world of the other a better place to be in.
(59) Nick asked:
Hello, I'm happy I get the chance to ask a philosopher, as I don't meet too many in daily life. The question I'm asking now is no other than the one about the spiritual meaning of humans! I don't expect a brain to be able to understand itself, but one of my recent discoveries, which is close to the field of psychology, is the similarity between stories like Santa Claus and religion. Both are passed on from one generation to another without anyone thinking about the consequences. When as a little child I found out that Santa Claus did not exist I had the feelings many children have, of frustration. I believe in my heart there was a feeling of being lied to, rather than of not receiving a gift again at Christmas. Recently I genuinely applied the same rationality to religion: having heard different opinions of different nations in this world, I wonder who Jesus really was, as I was born a Christian, and this is the main model character that I know in detail. Should I believe Jesus resurrected from the dead? How much of the Bible is made by or influenced by man? How can I know the truth while being surrounded by people who tell lies? I'll stop here, I hope you understand my concerns and wait to listen to your view on religion.
This is an interesting question which could signal the beginning of a journey that may occupy you for many years to come. A journey, that is, that begins, as you so rightly infer, when one starts to doubt the veracity of beliefs that heretofore one has accepted without question, and more significantly, beliefs that one has been encouraged to accept without question, quite often by people who do not know that these ideas are false because they have been indoctrinated into the same belief system.
The first thing that should be said is that, whilst you might prefer a more direct answer to your question, and whilst others may offer a more direct response, it is my view, in this instance, that this issue might be best addressed by a different, and more circuitous, route. For there are some issues, particularly in philosophy, that one must deal with in one's own way, at one's own pace, in one's own time, and when one is better prepared to accept the conclusions of one's findings: a preparedness, it can be said, that comes only after a good deal of study, reflection, and ultimately a readiness to sacrifice, to let go of, long and deeply held beliefs that one has come to realise are no longer tenable.
Thus, rather than presenting my views on issues such as 'the spiritual meaning of humans', who Jesus really was, whether or not he rose from the dead, how much of the Bible was influenced by man, or on religion itself, I would encourage you to seek answers to these questions through a combination of conscientious study and reflection. It is in relation to these issues that the ISFP, the International Society for Philosophers, can play a vital role, in that, as its mission statement infers, it comes to the table with no other agenda than a love of philosophy, and the desire to provide a forum for all those, amateur and professional, who share this love. For it is in studying the works of others concerned with the same issues; by discussing these issues with others of like mind; by reflecting on these issues with an open mind; and by being prepared to reappraise, to re-visit, re-evaluate, and where necessary, to let go of ideas and beliefs that you no longer find sustainable, that you will come to find your own answers to these often difficult and complex questions.
However, whilst, in this case, I think it best to let you work on these issues yourself, I believe I may be able to point to some areas of study that might help you begin your investigation. I should begin by saying that the Santa Claus example you give is most appropriate in that, for some, religion is for adults what Santa is for kids. The frustration and disappointment that children feel on learning that Santa is a myth results from learning this truth prematurely, before the mind has time to reason it out for itself. Whilst it can be argued that it is a myth built on the identity of a person who once existed, this can be of little comfort to the traumatised child at the time. If you spot an analogy here, it is not unintentional.
Let us really begin by looking at the derivation of the terms 'religion' and 'philosophy'. Patrick Quinn informs us that 'religion' derives from the Latin religio, meaning 'to bind', and signifies belief in, or obedience and sensitivity to, the sacred, which is conceived to consist of a supernatural power or set of powers regarded as divine and having control over human destiny, whilst 'philosophy' is taken from the Greek philos (love) or philia (affinity for or attraction towards) and sophia (wisdom, knowledge) (see Philosophy of Religion A-Z, 2005, p.180).
Thus, immediately we see that where religion 'binds' one to a particular belief or set of beliefs given or imposed by some transcendent entity, implicit in the definition of philosophy is the view that the search for and acquisition of wisdom and knowledge is more in the hands of the individual. Religion, then, involves the belief in the existence of a transcendent entity that has the power to control and determine the course of all events in the cosmos. Being religious involves adhering unwaveringly to the laws, tenets, and injunctions of the system of belief to which one is aligned, whether unwittingly or not. Religion does not involve a love of wisdom and knowledge, nor does it encourage the questioning of beliefs or 'truths' handed down by religious tradition; rather it demands obedience to the set of beliefs it holds have been revealed to it by an omniscient, omnipotent and unseen god. The demarcation point for religious enquiry, where it exists, is that God exists and that all knowledge and truth necessary to human existence has been, or will be, given in revelation. Philosophy, for religion, is seen as a useful method, a tool, for showing that that which it holds to be true can be validated by reason. And this can be said to be the crux of the matter, for whilst philosophy is concerned with many of the issues that concern religion, such as the proof of the existence of God, a priori truths, and so on, it is not, and never should be, dogmatic.
As Christianity is the religion to which you refer in your question, and that with which I am most familiar, it is the one that will occupy this discussion. With this in mind, can I suggest, in moving further toward a resolution of the issue(s) you raise, that you could question the historical accuracy of the Old and New Testaments; you could look at the arguments both for and against the teachings of Augustine, who saw philosophy as a continuance of religion, and of Boethius, who saw philosophy, in the form of Athena, as offering him 'consolation' in the face of his impending death. You might consider the pros and cons of Aquinas's proofs of the existence of God, and of St Anselm, who, like Descartes, held that if one could conceive of a perfect being (God) then this perfect being must necessarily exist. You could look at how certain tenets became incorporated into Church law through the Council of Nicaea; at the different and many forms of Christianity that existed before this event; and at the role Arius played in the introduction of the Nicene Creed, and the expressions of faith contained therein. You might look at the role the Inquisition and the Index of Prohibited Books played in the suppression of reasoned arguments against the teachings of the Church (as well as the connection between these institutions and the current Congregation for the Doctrine of the Faith), and at the treatment of such thinkers as Copernicus, Galileo, Bruno and many others who rejected the 'truths' imposed on them by the Church Fathers.
As with your question, I will stop here, for there is enough in the above to help you on your way in resolving the issues contained in your question.
I would like to finish by drawing on a popular Italian saying 'chi va piano, va lontano e sano', which translates something like 'one who goes slowly, goes far and well'. So Nick, or George, make haste slowly, and travel well.
(60) Lucy asked:
If pragmatic considerations show it is irrational not to believe in the principle of induction, do they also show it is irrational not to believe in God?
Mmm, it looks like Lucy is asking us to do her homework for her. This has all the hallmarks of an assignment or essay question. But unlike some we receive on Ask a Philosopher, this one is not that bad. How much help my answer is going to be is another question.
Two things ought to scream out at you when you see the phrase 'pragmatic justification of induction' (by the way, you'll find loads of pages if you search for this in Google):
The first point is, how on earth am I going to be persuaded by a pragmatic argument that belief in induction 'works in practice' or 'leads to practical benefits' if I'm not already committed to induction? In that respect, a pragmatic justification of induction is in exactly the same quandary as an inductive justification of induction. Just because induction works fine for you, or just because it has worked for me in the past, is no reason for me to believe that it will work for me now unless I have already accepted that inductive reasoning is reasonable.
The second point has to do with the allegedly modest idea of a merely 'pragmatic' belief. Suppose I accept that induction 'works' (or has worked for me in the past, or has worked for you); is that supposed to be a true statement, or only something which it is useful to believe? If I state that it is merely useful to believe the statement just made, is that a claim to truth, or am I merely saying that it is useful to believe that it is useful to believe... and so on.
This is all very well covered ground, as you will discover if you do an internet search. In any event, the idea of a 'pragmatic justification of induction' has at least two major points of uncertainty and instability before we even go on to consider the even more explosive idea of a pragmatic proof of the existence of God.
(In my last post, I described myself as a 'pragmatist with a small 'p''. Perhaps, one should make clear that the background to this question is most definitely Pragmatism with a big 'P', I'm talking in particular about the philosophies of C.S. Peirce and William James.)
The Pragmatist may object at this point that I have willfully misinterpreted the pragmatic case for induction. We are not concerned with anything so abstract as the 'definition of truth' (although this more ambitious thesis is what James attempted in Pragmatism, 1907), but rather the question of how one ought to behave, or, equivalently, what makes behaviour 'rational' or 'irrational'. When I avoid putting my hand in a pot of boiling water in order to stir the spaghetti, I am not considering what would be a 'true statement' concerning the effect of a temperature of 100 degrees Centigrade on living human tissue. Rather, I am simply avoiding doing something which I know to be harmful. The knowledge in question is practical knowledge. It is something you just don't do, without having to think about it first.
We navigate our way through the world, avoiding myriads of dangers large and small, choosing intelligently without pausing to reflect on that choice. This is part of what it is to 'be rational'. You wouldn't call someone rational who only did the rational thing when prompted to think about it, but the rest of the time behaved in a more or less random way.
This also disposes of the objection that a pragmatic justification of induction presupposes inductive reasoning. The whole point of the pragmatic 'turn' is to halt the threatened regress of an inductive argument for preferring induction. At a certain point, thinking comes to an end and we just act. The capacity to learn from experience (which is basically all that induction amounts to) is an intrinsic part of the capacity to make intelligent choices, whether or not these choices are reflected upon.
I'm prepared to buy all this, just for the sake of Lucy's question. I should add, however, that I don't really like the idea that induction is something we just 'have' to believe, come what may. There are principles which it definitely pays to believe even though they are apparently counter-inductive. One is Sod's Law: If something can go wrong, it will go wrong. If you estimate the chance of something going wrong with your plan, your estimate, however rationally based, however carefully you have sifted all the relevant inductive considerations, will always be too optimistic. Another well-attested counter-inductive principle (which I don't have a name for) is that Good Things Never Last. On the basis of induction, rationally it oughtn't to make a difference whether you are onto a 'Good Thing' or not, but in practice it just does.
But maybe that just shows what a pessimist I am. Maybe (to be really clever, if not cute about this) you could make an inductively based case for pessimism, on the grounds that it offers a necessary rational corrective to the natural human tendency to be over-optimistic.
However, this is merely delaying the real question: whether a useful analogy or, better still, inference can be drawn between a pragmatic justification of induction and a pragmatic justification of theism.
On the face of it, there's a huge disanalogy, a massive non-sequitur. You say belief in God works for you. I say non-belief in God works for me. If you didn't believe in God, you say, your life just wouldn't be worth living. My response is that if I believed in God, my life would become hell. There would be no place far away enough or deep enough to hide.
Instead of the happy-clappy belief that 'God will always love me' or 'God is on my side', I prefer the honesty of good old-fashioned Catholicism. When you die, you can expect to spend 1000 years in Purgatory (according to one book I came across; it's a grimly fascinating subject for debate), going over every aspect of your life, inch by inch, until you are thoroughly cleansed and prepared for everlasting life in Heaven. Lovely.
The idea of being a 'God-fearing man' has this aspect of truth about it. As Geach (a Roman Catholic) says in his defence of Divine Command theory (see my post on Plato's Euthyphro), to defy God is the very definition of insanity. For my part, I couldn't live with that fear looming over me. The fire and brimstone preachers had the right idea: What the Hell are you smiling for?
However, you will say that I have just conceded the Pragmatist's case, by demonstrating that I am prepared to argue over the question of belief in God, on the ground of what is or is not the most weighty pragmatic consideration. How that argument is resolved is a mere point of detail. I do not concede. I am expressing my personal feelings. Unlike the Pragmatist, I don't consider for one moment that my personal feelings constitute an argument, let alone a 'rational' argument. So far as the existence of God is concerned, there is no case. There is no doubt where the onus of justification lies: it is with the theist, not the atheist.
For the sake of argument, however, let's put aside the last point. Suppose it were true that the question of the existence or non-existence of God is one to be settled by pragmatic considerations. To answer Lucy's question (finally!) there is still a huge disanalogy with the pragmatic justification of induction because (notwithstanding my somewhat tongue-in-cheek case for counter-inductive principles like Sod's Law) there really isn't a meaningful debate about whether or not we should accept induction. The genuine counter-inductivists died out long ago.
(63) Roy asked:
I have trouble understanding what people mean when they use a phrase with the word exception. To me it sounds like a contradiction. So my question has two parts:
A) Is using the term exception ever legitimate?
B) Does the term 'except' usually contradict the general rule that comes before it?
For example, All ice cream should be taxed, except vanilla.
It seems that the quantifier 'all' is false if a member is excluded.
For example, All students passed the final exam except Roy.
Seems to me this means only Roy failed the final exam and the quantifier 'all' makes the sentence false.
Please help me make sense of the term exception. Thanks for your help.
I am going to treat Roy's question as a problem for truth-conditional semantics. Grammarians, who professionally are required to have a little more respect for natural language 'as it is spoken', might respond differently.
The modern wave of truth-conditional semantics was launched by the work of Donald Davidson in the late 1960s, beginning with his seminal article 'Truth and Meaning' (1967). Davidson was merely continuing the project started by Frege with his revolutionary Begriffsschrift, and continued by the early Wittgenstein, Russell and Tarski.
Davidson reformulated the task of a semantics for natural language, basing it upon Frege's ground-breaking invention of first-order predicate calculus. The project aimed to satisfy two requirements: (1) to explain how it is that a speaker, using their knowledge of a finite number of words or semantic units, is able to generate a potentially infinite number of meaningful sentences; (2) to make explicit the logical entailments between sentences which are only implicit in natural language.
Applied to the notion of 'except', what we need to explain is how it is possible for a speaker to use this term consistently in any number of sentences which they have never used or encountered before, and how they are able to recognize the logical implications of a sentence containing the word 'except'.
The logical analysis represents the speaker's 'implicit knowledge'. What exactly it means to attribute 'implicit knowledge' to a speaker is itself a problem in the philosophy of language, but as it affects truth-conditional semantics generally, I won't develop it here.
Now here comes the crunch: if you can do this, if you can give an analysis which satisfies Davidson's two requirements, then Davidson would say it really doesn't matter too much if the analysis which you offer of the idiom doesn't look at all like something that an ordinary speaker, unversed in the symbolism of first-order predicate calculus would recognize.
This is all rather general. Let's apply this idea to Roy's case.
I can see why Roy thinks that it is odd to say something like, 'All the students passed, except Roy who failed.' If they all passed, then Roy passed. This follows logically from a basic rule of inference which any speaker competent with the term 'all' recognizes. But we just said that Roy failed. He didn't pass. Therefore Roy passed and Roy didn't pass: a logical contradiction.
Or is it?
Here is a first shot at translating the statement 'All the students passed, except Roy', into first-order predicate calculus:
(x)((x is a student & x is not Roy → x passed) & (x is a student & x is Roy → x failed))
'For all x, if x is a student and x isn't Roy, then x passed; if x is a student and x is Roy, then x failed.'
This seems OK. Let's try to apply it to the vanilla example:
(x)((x is ice cream & x is not vanilla → x is taxable) & (x is ice cream & x is vanilla → x is not taxable))
'For all x, if x is non-vanilla ice cream then x is taxable; if x is vanilla ice cream then x is not taxable.'
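The two-conjunct analysis can be checked mechanically over a finite domain. Here is a minimal illustrative sketch (the function name and the example data are my own, not part of any standard semantics library):

```python
# Toy model of the analysis: 'All F are G, except e1, e2, ...' is true
# iff every F outside the exception set is G, and every F inside the
# exception set is not G -- the two conjuncts of the formula above.

def all_except(domain, is_F, is_G, exceptions):
    return all(
        not is_G(x) if x in exceptions else is_G(x)
        for x in domain
        if is_F(x)
    )

# 'All the students passed, except Roy.'
results = {"Roy": "fail", "Mary": "pass", "Christopher": "pass"}
students = set(results)

true_case = all_except(
    students,
    is_F=lambda x: True,                  # everyone in the domain is a student
    is_G=lambda x: results[x] == "pass",
    exceptions={"Roy"},
)
# true_case is True: both conjuncts of the analysis are satisfied
```

Note that on this analysis the sentence comes out false either if some non-excepted student failed or if Roy passed after all, which is exactly the behaviour a competent speaker would expect.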
This is fine so far as it goes but it seems to leave out a rather important aspect of the meaning of 'except', which any competent speaker would recognize. When we say 'all...except...' we are pointing out an exception to a generalization, which otherwise holds. 'All trains into London St Pancras are running normally today, except from Chesterfield.' If the announcer had gone on to list all the trains into London St Pancras bar one or two, then the statement would be regarded as false, or at best, deliberately misleading.
Exceptions are in the minority. This is an important part of what we mean when we use the term 'except', and any logical analysis which fails to recognize this is inadequate. If all the students except for Roy had failed, then you wouldn't say (unless you were being cruel), 'All the students passed except for Roy, Mary, Christopher, Bob, Susan...'.
Closely connected with the use of the term 'except' is the quantifier, 'most'. 'Most of the candidates passed the exam.' Or, 'All the candidates passed, except Roy and Susan.' (We sometimes loosely say, 'Most of the students passed, except Roy and Susan'. But this is confusing when you think about it.)
But how do we evaluate what counts as a 'majority' or 'most'? Is it more than 50%? Can the threshold change between different contexts? 'Most blood supplied for transfusions in the UK is free from contamination.' That had better be 99.999% or the Minister for Health has a potential scandal on his hands.
Various attempts have been made to give a truth-conditional semantics for 'most', although I don't know if any particular analysis is generally accepted. To allow a vague term into logic itself would have caused great affront to Frege, who saw natural language as necessarily deficient and lacking the precision of logic. The fact is that ordinary speakers exercise refined judgement in deciding exactly when and how to use terms like 'except' and 'most' and this ability is one that is inexplicable in terms of first-order predicate calculus. So much the worse, some would say, for truth-conditional semantics.
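For illustration only, the proportional reading of 'most' with a context-relative threshold might be modelled like this (a hypothetical toy evaluator, not any generally accepted analysis; the names and the default threshold are my own assumptions):

```python
# Toy generalized-quantifier treatment: 'Most F are G' is true iff the
# proportion of Fs that are G exceeds a context-set threshold. The
# default 0.5 gives the bare-majority reading; a context like blood
# screening would demand a far higher threshold.

def most(domain, is_F, is_G, threshold=0.5):
    fs = [x for x in domain if is_F(x)]
    if not fs:
        return False                      # vacuous 'most' treated as false
    proportion = sum(1 for x in fs if is_G(x)) / len(fs)
    return proportion > threshold

# 'Most of the candidates passed the exam': 7 of 10 passed.
marks = {f"c{i}": (i < 7) for i in range(10)}
bare_majority = most(marks, lambda x: True, lambda x: marks[x])   # True (0.7 > 0.5)
strict_context = most(marks, lambda x: True, lambda x: marks[x],
                      threshold=0.99)                             # False (0.7 < 0.99)
```

The point of the `threshold` parameter is precisely the difficulty noted above: the same sentence form is evaluated against different standards in different contexts, which is one reason 'most' resists a single clean truth-conditional clause.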
(73) Callum asked:
Recently some thoughts came into my head that worried me greatly. The widely accepted (at least I think it is, but am hoping isn't) view of Determinism has worried me purely because it means everything I do was always going to happen (taking away value from my achievements) and therefore makes criminals not bad (not that I'm thinking of being a criminal).
Are there any credible philosophers or experiments that resist determinism or (if my words are ambiguous) believe we have choice to do otherwise e.g. a person walking in a shop has the possibility of stealing or not stealing. As this would put me at ease.
What is the primary thing that separates humans from animals. I was taught it was the ability to reason?
Because the following response covers issues raised in both of the above questions, I have decided to tackle them both in one reply.
Do we live in a world in which, for us, our lives are mapped out for us in advance? Are all our actions determined by factors that are not, or never can be, of our own making? If so, what part does the faculty of reason play in such a world? And finally, is it really reason that separates humans from other animals, or is there some other faculty that marks us out as different from other creatures? These are just some of the issues that this response to the two above questions will attempt to address.
The central thesis of determinism is that everything that happens is fully determined by things that have preceded it. For such a thesis to be sustainable each and every event must have a cause that would ensure its occurrence. Although some philosophers accept the notion of a 'probabilistic' cause (a notion that concedes only that it is probable that an effect will follow from a certain occurrence), a theory that argues that some events merely have a probabilistic cause does not qualify as a valid definition of determinism.
Thus we see that the issue of determinism is a moot one, and one that continues to cause much discussion amongst philosophers even today. The success of scientific theories, especially Newton's theory of gravity, led many, especially the Marquis de Laplace, to believe that everything in the universe, including human behaviour, was fully determined. The Newtonian universe, then, was completely deterministic with no room for chance. For Newtonians, probability was the consequence of ignorance in a deterministic universe where everything unfolded according to the laws of nature, and/or of God (see Quantum, by Manjit Kumar, p.218). Whilst there were those who resisted this approach, in general it remained the standard assumption of science until the early part of the 20th century. One of the earliest indications that this approach would have to be abandoned came with the development of quantum mechanics by the German physicist Werner Heisenberg. At the heart of this approach was the 'Uncertainty Principle': Heisenberg showed that the electron is a particle, but a particle that can also be described in terms of waves. The uncertainty around which the theory is built is that whilst we can know the path an electron takes as it moves through space, or we can know where it is at a given moment, we cannot know both (see A Short History of Nearly Everything, by Bill Bryson, p.158). In essence what this means is that, in practice, we can never predict ('determine') where the electron will be at any given moment; we can only say where it probably will be.
Stephen Hawking tells us that the uncertainty principle had profound implications for the way we see our world. To begin with, it marked the end of Laplace's concept of a deterministic universe, for we cannot predict the future with any certainty if we cannot measure the present state of the universe to any precise degree (see A Brief History of Time, p.57). In short, quantum physics introduces an unavoidable element of unpredictability or randomness into science, into the workings of the universe, and into human behaviour. According to Hawking, each event lends itself to a number of different possible outcomes (ibid).
Now, while there is no doubt that reason is a faculty that plays its part in allowing us to make this transition, it can be argued that we cannot say for definite that reason is unique to humans alone, for other animals also display, albeit to a lesser degree, the ability to reason things out for themselves, as any dog lover will testify. If not reason alone, then, what is it that separates us humans from our animal cousins? For the Italian philosopher Giambattista Vico it is the faculties of imagination and memory that lift us from the ordinary to the extraordinary. Indeed, so strongly does Vico feel on this issue that he argues that, since all ideas, concepts, ideologies and worldviews have their genesis in the human imagination, it is a faculty that, together with memory, should be developed in the young before they are exposed to the discipline of philosophy. For Vico, to educate adolescents in philosophy before they have been grounded in the faculties of imagination and memory is to engender in them a sense of oddity and arrogance that manifests itself in adulthood and leaves them unfit for social intercourse (see On the Study Methods of Our Time, p.13).
There are two related points with which I would like to finish. The first is that while reason may play its part in allowing you to fulfil your dream, the source or genesis of the dream is in your imagination. The second is that it should be kept in mind that the present is the only time there is. The past is gone, the future is yet to come, the present is all there ever is. As St Augustine says, the past is really thinking, in the present, of things that have already happened, and the future is the expectation, in the present, of things that may happen. And it is in the present that our choices are made. Notwithstanding what has gone before, we have within our power the capability of changing that which heretofore has appeared to be our destiny. By drawing on our imagination, our memory, and our power of reason, we too, like the electron, can make that quantum leap that could not have been predicted or predetermined.
(77) Ronny asked:
Human Test Tubes?
If this website is anything to go by depression appears to influence a lot of people into looking to philosophy to provide some answers to their issues with life. It appears I am one of those people although I am not naive enough to expect a definitive answer to any of my questions. I simply feel the need to express a thought that has dogged me since being offered medication for my depression.
My depression was explained to me, when initially diagnosed, as being due to low levels of certain chemicals within my body and medication would go some way to help correct this imbalance. Coming from a medical background up to graduate level, I was well aware of the complexities of human physiology. However, having had depression explained to me in such a manner I began to question whether everything we are as human beings is not a result of a series of complex chemical reactions? Light passes into my eye where a chemical reaction converts this to a signal passed to my brain where further chemical reactions occur and I am presented with an image. Sometimes the images we perceive can produce what we describe as an 'emotion'. Could emotions therefore be seen as the end point of a chemical cascade? Are 'feelings' also end points of chemical processes? I hear a sound which is converted, via a mechanism within the ear, to a chemical reaction to produce electrical signals within the brain. Further chemical reactions branch away from this and the end point can be a stimulation of further physiology and a 'feeling' is produced. Does repetition reinforce a certain chemical pathway so that we develop the same 'feeling' or 'emotion' to the same stimulus? Is that how we come to 'like' or 'dislike' something?
These questions made me wonder whether it is ever truly possible to therefore control 'feelings' or 'emotions'? Once that chemical cascade starts can we influence it? Then again, while writing this I am having 'thoughts' that I feel I am controlling and if I expand my premise to the process of 'thinking' as being a chemical process occurring within the brain, am I not influencing these chemical reactions?
Once again, I don't feel naive enough to think I am the only person ever to have considered whether the body is not one large test tube full of complex chemical reactions with mind numbing interactions that will never be truly understood.
However, what do we become if we view ourselves in this way? Is our feeling of self or the belief that we make our own decisions in the way we interact with the world the result of a series of chemical processes?
The first thing I want to say to Ronny is that I take the idea that depression and philosophy go together very seriously indeed.
I remember being told, many years ago, that if I continued with philosophy I would end up 'looking for the shortest rope'. That was by my uncle Jack. At the time, I thought Jack was probably wise enough to know that his own mental constitution wasn't suited to pondering the meaning of life. I can see his worried face even now. But I was different. I could handle it. I'd peeked into the abyss and it hadn't fazed me.
Then I recall that two of the lecturers who taught me when I was an undergraduate subsequently committed suicide. Maybe they thought they could handle seeing into the abyss, but they were wrong. But that's just idle speculation, innit?
Actually, I rather like looking into the abyss. When I cast my eyes around this dingy world, the tawdry sideshows that human beings call 'culture', the abyss is the only thing with any real depth. Anxiety is the only real human emotion. (I think Freud said that.) But philosophy isn't just about plumbing the dizzy depths. It's about remembering and focusing. About being present. It can sometimes be a pleasurable activity (especially if you have a taste for Schadenfreude) but it's not something you do for pleasure.
So is Ronny right, that 'depression appears to influence a lot of people into looking to philosophy to provide some answers to their issues with life'? or did my Uncle Jack see deeper into the truth about these things? And what the hell has any of this got to do with taking pills?
My chemical of choice is alcohol. Problem is, for medical reasons (chronic sarcoidosis, or maybe Sjogren's syndrome; the doctors don't seem to know which) I can't drink a single drop. I get a super-hangover that lasts for days. You know that feeling, when you just need a drink? I'm talking about someone who isn't in any way addicted to alcohol. I'd settle for one bottle of beer a week. I can't even have that without causing myself a lot more pain than pleasure.
At least I still have my coffee. I've been told it's bad for my condition, but I'm not aware of any particularly adverse effects. It helps me concentrate. (What do they know, anyway?)
They also say you shouldn't drink alcohol if you have a tendency towards depression. At any rate, you shouldn't drink alone. But social drinking is the best cure I can think of. If alcohol had never existed, the history of Western Philosophy would have been entirely different. Or maybe it wouldn't have happened at all. Read Plato's Symposium, if you don't believe me.
Getting back to pills. Ever since the first 'magic bullet' (Salvarsan, Dr Ehrlich's 'miraculous' cure for syphilis), an increasing part of the chemicals industry has been dedicated to discovering new, ever more potent formulations to add to the human test tube (nice image). Psychiatric disorders are exactly on a par with physical illnesses and disorders from the empirical standpoint. If it works with sufficiently benign side effects, that's all you want to know.
From this perspective, it's really a red herring to consider whether depressive people are that way because of a chemical imbalance. Even if their depression wasn't caused by a chemical imbalance (we'll get to what 'cause' means in a minute) a chemical cure can still work just as well. To repeat: we're only concerned with 'what works'.
I'm a good materialist, that is to say, I accept the minimal commitment for being a materialist, that mental events are supervenient on physical events. Anything else is up for grabs (a huge topic in the philosophy of mind which I don't want to get into now). Any thought, any feeling, any emotion is reflected in chemical or electro-chemical changes in my body. The direction of causation is the hard bit to figure out, but Ronny has half-seen this ('if I expand my premise to the process of 'thinking' as being a chemical process occurring within the brain, am I not influencing these chemical reactions?').
The bottom line is that you can interact with someone as a person, that means communicating, one person to another (Freud's 'talking cure'); or you can interact with them as a test tube. And that works too, sometimes. Some would argue, it works a lot better, certainly a lot faster.
This is all very circuitous (I'm sorry for that) but you'll see where this is going in a minute.
The other week, one of my old Mac laptops (a Powerbook 1400) died. Instead of starting up in the normal way with the 'happy Mac' logo, I got a picture of a floppy disk with a flashing question mark, then a black screen. I knew the hard drive was ancient and had probably had it. But I wasn't giving up. So I gave the laptop a sharp slap just to the left of the touchpad, where the hard drive is located. This time, the laptop started up, and has been working fine ever since.
We do this with people too. Sometimes, a sharp slap is just what a person needs. But doctors aren't allowed to do this, so they give a chemical slap instead.
What I'm working up to say is that this whole way of thinking about people and their mental trials and tribulations is totally wrong. To see that it is wrong, you have to get away from boneheaded empiricism and the idea that all that matters is that you 'feel OK' again. Freud understood. He saw his aim as transforming distressing psychological illness into 'generalized unhappiness'. When you do that, you have become free, your actions are your own rather than merely effects of your neurosis.
Freud said that in order to write, he needed to be in a mood of mild depression. The fact is, all genuinely creative work is painful. Gaiety and joy are wonderful things, but they're not ultimately real. At best, they are refreshing interludes that help strengthen our resolve, and they come as gifts. There's nothing more shallow or annoying than permanently joyful people.
So get away from the idea that all you need is to 'feel better'. There are other things you need, perhaps need more. (Perhaps philosophy is one of those things; or maybe psychotherapy, where at least you'd have one real human relationship.) Accept the pain, adapt yourself to it, work with it. If you can find some depth in your life, whether from philosophy or some other activity, that is of far greater value.
(83) Derrick asked:
With the rapid implementation of advanced automation, robotics and, soon, nanotechnologies, will there still be a place for the human masses?
We have long since passed the point of sustainability; we pollute our ever shrinking supply of fresh water, deforest at accelerating rates and erode our agricultural land, and every human disaster is serviced by emergency aid, the result being further breeding to add to the rescue mission next time.
For how long will the haves continue to support the have nots? Will there still be a place for humanity's masses in the coming ages, or are we in the process of eliminating ourselves?
It's unusual for me to be answering another question so quickly after posting a tentative answer (on human test tubes), but Ronny's question on Monday has put me in a mood which I'm having some difficulty shaking off.
In my answer to Ronny I said that I 'rather like looking into the abyss'. That is such a gob-smacking thing to say let alone mean. Did I mean it? Or was I just showing off? I feel as if I meant it. My mood is quite buoyant.
How much can I do without? Work is piling up on my desk today, but I don't sense any strong ethical impulse to get on with it. Diogenes' question (remember, Diogenes who lived in a barrel?) haunts me. I don't need any of this.
OK, well that's enough about me. What about the human race? What do we need? How much can we do without? Why do we need the masses?
Obviously, the world economy still requires a massive resource of cheap labour but (as Marx foresaw) advances in technology will eventually make manual labour redundant. Imagine a workforce of obedient robots who need nothing apart from a few drops of oil and a regular recharge. Well, that's pretty obvious.
What are the 'masses'? José Ortega y Gasset gives a pretty potent definition in his book The Revolt of the Masses (1929). The main point to note is that one shouldn't make the mistake of identifying the masses with the 'have nots'. Ortega's typical 'mass man' is the self-satisfied bourgeois.
Get rid of them all, is the answer. Get rid of the have nots, for sure. But also get rid of the bourgeoisie. Who else? Anyone with an IQ under (hmmm) 135. That's a bit generous, I know; not enough to get into Mensa, but that's OK because we're eliminating Mensa members anyway (too smug and self-satisfied by half).
To be serious for one moment (as I'm trying to be, because it's a serious question): Here's a useful thought experiment. Imagine that human beings are the only intelligent life in the universe. I know that we're repeatedly told that the probability of alien intelligence is overwhelming, despite the complete lack of any concrete evidence, but it isn't a fact; it isn't something we know.
So, imagine we're all alone. Does that make you feel more important? Does it make you any less willing to let a few billions die? Not me. What about the survival of the human race? Surely, one would care about that. But why? Survive, for what purpose?
I don't know. That's the honest truth. I just don't know.
I can't think in such general terms. When I try, I lose all my bearings. There are persons whose survival, and happiness, I very much care about apart from my own survival and well being. Instead of starting at the 'big end' (the entire human race) and eliminating the ones whose survival doesn't seem to matter, maybe the thing to do is start at the other end, the small end, by writing a list of all those I do care about, all those who I would allow into the Ark, so to speak.
As each human being comes into focus, looks me in the eye, I feel as if I would have no choice but to let them in.
The solution to 'the world's problems' has been a topic of debate for a long while, certainly since Malthus wrote his Essay on the Principle of Population. Undoubtedly, technology must play an important part. But, as Derrick has so clearly seen, if we rely only on science and technology then there may very well come a time when human beings, or at any rate a large proportion of the human race, become simply redundant.
This isn't the place for a mealy-mouthed lecture on ethics. I parade my moral virtue for no man. So I will simply say this. A heap of sand is made of individual grains. The masses are made of individual persons, and each person has a face. Whatever your ethical or political views may be, that is one fact which you should not allow yourself to forget.
Ronny, there are several questions here, some of them very complex. You are right in thinking that any sort of emotional or mental disturbance can turn people towards philosophy, often quite inappropriately, since they imagine that philosophy answers questions about the meaning of life. Philosophy does deal with questions about the meaning of life, but not in the way that people might imagine.
So just as someone with a broken leg would do better by seeking medical attention rather than seeking out a philosopher, so people with emotional or mental problems, who may feel that life is meaningless, should not seek out a philosopher to convince them that this isn't true.
In general philosophers who think that life is meaningless don't feel that life is meaningless. These philosophers may be quite cheerful and happy. On the other hand people suffering from depression may feel that life is meaningless although they may not have real reasons to think that this is true.
Then you wonder if all our behaviour can be reduced to brain chemistry and brain processes. Certainly it is true that brain chemistry is involved in everything we do, but in general things cannot be reduced to brain chemistry. To give you a crude example of this, consider the sentence 'He stole the money from the cash box'. This could never be reduced to just his brain chemistry, because 'stealing' is a social construct which presupposes a society with property rights and property laws etc., and these complexities cannot be reduced to anyone's brain chemistry.
However this also leads us to a real philosophical question of determinism vs free will, but this is too complex a question to answer in an email. What is true is that we recognise that people suffering from mental disturbance may not be as able to control their thoughts and feelings as other people. However, where we draw these lines is a difficult decision, and the fact that we sometimes excuse people from responsibility for their thoughts and feelings doesn't imply that we must always excuse everyone.
You are right to think that the body is one large test tube. Humans are physical beings completely made of chemicals. However the implications of that are not as clear as you might think. So for example we could say 'Chemicals are completely lacking in intelligence therefore humans must be completely lacking in intelligence' but that isn't true.
Part of your problem is the old mind-body problem, and part of it is the error of reductionism.
The mind-body problem goes back at least as far as Descartes, who was both a devout Catholic and very keen on the new science arising from Copernicus and Galileo. The church was very opposed to the new science, and I believe the reason that Descartes divided reality into two substances, which he called thought and extension (mind and matter today), was that thought belonged to religion and extension to science; since thought and extension could not interact, there could then be no quarrel between religion and science. The problem that then arose was how mind and body could interact, as with the mind willing muscles to move and bodily injury causing pain. This is the mind-body problem.
The error of reductionism is a 'nothing but' error, the error of explaining the properties of higher level systems exclusively in terms of lower level systems. This is an error because when systems are organised into higher level systems, the higher levels have emergent properties that cannot be explained by their subsystems alone. Two major examples are the emergence of life out of chemical systems (molecules) and the emergence of mind out of brain. To claim that life is nothing but chemistry, or that mind is nothing but brain activity, is to commit this error. Another way of looking at this is in terms of the whole being greater than the sum of its parts: emergent properties are the excess of the whole over the sum of the parts, and it is an error to try to explain the excess exclusively in terms of those parts.
It is a fact that there can be causal interactions between system levels. For example, poisons can destroy life, pharmaceuticals can enhance it, and living organisms can make chemical changes to what they breathe and eat; this is two-way causation between living organisms and chemistry. And this brings us to your problem. Drugs can reduce depression; drugs are chemicals; depression is mental; so chemicals can causally influence mind. But this does not mean that depression or its absence is nothing but these chemicals. And equally so for most, perhaps all, other mental phenomena. So is our feeling of self, or the belief that we make our own decisions in the way we interact with the world, the result of a series of chemical processes? No: it is partly so, but not wholly so.
Ronny, you've stumbled across a version of the 'problem of free will' there. As you say, you're not the only person to have considered that question; philosophers have been mulling it over for a very long time, and it's safe to say that no consensus has yet been reached!
Here are three things you might think (though there are other options available). First, there are some philosophers (e.g. Galen Strawson) sometimes called 'illusionists' who think, for roughly the reasons you give, that we cannot have free will. Having free will would require that we are somehow 'originating causes' of our actions (including mental actions like deciding); our decisions etc. would somehow have to come out of nowhere and not be caused by chemical reactions or neurons firing or whatever.
Second, there are some philosophers (in particular P. F. Strawson) who think that, while it may indeed be true in some sense that our decisions, emotions, etc. are 'just' a matter of chemical reactions and so on, this is not a way of conceiving of ourselves that we should, or even can, adopt. In order to make sense of our lives, and in particular of our relationships with other people and our practices of praising and blaming people for what they do, we cannot think of our actions as 'just' a matter of chemical reactions or whatever. So in a sense Strawson agrees with the illusionists; but while the illusionists conclude that, since we clearly *are* in fact made of physical stuff and what we do is a matter of physical processes (e.g. chemical reactions), freedom of the will is an illusion, Strawson thinks that *no* argument could show that our self-conception as moral agents is mistaken. Even if we were somehow psychologically capable of giving up that self-conception, we simply wouldn't be able to understand ourselves and our fellow human beings if we did.
Third, there are some philosophers (e.g. Daniel Dennett) who think that there just isn't really a problem here at all. We just have different ways of describing ourselves, and they are perfectly compatible with one another. So, for example, think of a computer chess programme. On one level, when your computer 'plays chess' with you, there are just a bunch of electronic signals zipping around the computer, which result in different patterns of pixels on the screen. But if we just describe the programme in this way, we'll be missing out on important facts: that the computer has just taken your queen, say. And that the computer has just taken your queen is just as much a fact about the world as are all the complicated facts about electronic signals, circuitry, etc., even though, in some sense, the former is nothing more than the latter. There's nothing mysterious going on here. Similarly, even if, say, my decisions and emotional reactions are, on one level, just a bunch of chemical reactions, that doesn't make it illegitimate to say that I really am experiencing emotions or making decisions. Nor does it make it illegitimate to say that I have *control* over those decisions or emotions, any more than (to use an example of Dennett's) it is illegitimate to say that the thermostat controls the temperature of your house.
So in answer to your questions:
'Is our feeling of self or the belief that we make our own decisions in the way we interact with the world the result of a series of chemical processes?' Yes, probably.
'What do we become if we view ourselves in this way?' Well, the issue is whether, given the answer to the first question, we are obliged to view ourselves in this way at the expense of another way (i.e. as moral and rational beings who make decisions, experience emotions, etc.). As will be clear by now, philosophers disagree on that question!
British Philosophical Association
(87) Bernice asked:
The German philosopher Immanuel Kant is said to have merged both the basic contradictory ideas of the rationalists (Plato, St. Augustine, Descartes, etc.) and empiricists (Aristotle, Locke, Hume, etc.). He said, 'Thoughts (concepts) without content (sense data) are empty; intuitions (of sensations) without conceptions are blind.'
What does Kant mean? In what way did Kant merge or synthesize rationalism and empiricism through this saying? Explain.
Bernice, the first thing that should be said about Kant is that he is reputed to be difficult to understand. It is for this reason that I apologize in advance for presenting a somewhat complicated response to your very interesting question. I do so, however, in the belief that my answer to your question, while perhaps straying somewhat beyond the bounds of the question itself, may offer a more comprehensive appraisal of how Kant can be seen to forge a synthesis between rationalism and empiricism.
Empiricist philosophy argues that there is a connection between the outside world and the human brain, a connection that is made through sense impressions and their impact on the brain: an impact which is scientifically investigable and understandable. According to the Empiricist view, human knowledge is something 'out there': something that is external to the mind. Human beings, say Empiricists, are not entombed within their minds; for them, 'mind' and 'world' are not inseparable. In his An Essay Concerning Human Understanding (1690), John Locke declared that the mind was a tabula rasa, a blank slate. Human beings, he maintained, are born with nothing other than the capacity for experience through the senses. The knowledge we acquire is not due to any innate power to reason, but to the accumulation and organization of experience. David Hume (1711-1776), one of Britain's most eminent Empiricists, followed Locke's argument. 'We know the mind', he said, 'only as we know matter: by perception'. Hume maintained that the mind is not a substance, an organ of ideas, but an abstract name for a series of ideas, memories, and feelings, all of which have their source in experience.
Immanuel Kant (1724-1804), was impressed with the Empiricist argument that experience is the basis of all knowledge. However, he was unhappy with Hume's skeptical conclusion. It was while reading Hume that Kant 'awoke from his dogmatic slumbers' and realized how he could answer the destructive skepticism of Hume, which Kant believed had threatened to destroy metaphysics. While Kant agreed with Locke and Hume that there are no such things as innate ideas, he could not accept that all knowledge begins with experience. 'Though all our knowledge begins with experience', he said, 'it by no means follows that all arises out of experience'.
The notion of idea advanced by Locke in his An Essay Concerning Human Understanding is central to Hume's epistemology: Hume's concern was how do we know anything for certain? As mentioned above, Hume's view was that all knowledge derives from experience. Experience, he said, consists of perceptions: impressions and ideas. Impressions differ from ideas in intensity, that is, in their 'degrees of force and vivacity': those perceptions which enter with the greatest intensity are impressions. Ideas are fainter copies of impressions, and every simple idea corresponds to a simple impression. However, it is also possible to have complex ideas. These are derived from impressions by way of simple ideas, but they do not necessarily conform to an impression. For example, I can have the idea of a Sphinx by combining the idea of a woman with the idea of a lion: I have put together my impression of a human with my impression of a lion. All experience, said Hume, is a sequence of perceptions. All notions, such as cause and effect, bodies and things, even the idea of God, are but mere suppositions: amalgams of impressions.
In 1781, in response to the claims of empiricism, Kant published his famous Critique of Pure Reason; his ambition was to show pure reason's possibility, and to exalt it above the impure knowledge which comes through the channels of sense. So when Kant states that it by no means follows that all knowledge arises out of experience, he means that pure reason is knowledge that does not come through sensory perceptions: knowledge that is independent of all sensory experience, and knowledge which belongs to us by the inherent nature and structure of the mind. Knowledge, said Kant, is not all derived from the senses, as Hume believed he had shown, but is derived from both sense and reason. Rationalists, such as Descartes, believed that the basis of all knowledge lay in the mind; Empiricists, such as Locke and Hume, held that all knowledge of the world proceeded from the senses. Kant believed that both sense and reason are involved in our conception of the world. According to empiricism, habit arises as a consequence of knowledge which happens after, or succeeds, contact with sensation: it is a posteriori. Rationalism proposes that knowledge is analytic: it attempts to anticipate experience by constructing systems of logical deduction from basic axioms. This results in the possibility of a priori ideas of reason. By considering both Empiricism and Rationalism, Kant created a sophisticated model of knowledge which overcame the simplistic notion of the subject either anticipating or reacting to experience.
Hume maintained that it was only force of habit that made us see the causal connection behind all natural processes. Kant refutes this argument; the law of causality, he maintained, is eternal and absolute: it is an attribute of reason. Human reason, he said, perceives everything that happens as a matter of cause and effect; that is, Kant's philosophy states that the law of causality is inherent in the human mind. He agreed with Hume that we cannot know with certainty what the world is in itself. We can only know what the world is like 'for me'. We cannot know things in themselves (noumena); we can only know them as they appear to us (phenomena). However, before we experience 'things' we can know how they will be perceived by the human mind. We know them a priori.
The mind, said Kant, contains modes of perception that contribute to our understanding of the world. These modes of perception are space and time. Space and time, for Kant, are not concepts, but forms of intuition. Everything we see, hear, touch, smell, feel etc., that is, everything that happens in the phenomenal world, occurs in space and time. But we do not know that space and time are part of the phenomenal world; all we know is that space and time are part of the way in which we human beings perceive our world. Space and time, said Kant, are irremovable spectacles through which we view the world; they are a priori forms of intuition, that is, they shape our sensory experience on the way to being processed into thought. Space and time are inherent modes of perception that determine the way we think. It cannot be said that time and space exist in things themselves; rather, they condition the consciousness by which we, as humans, perceive and conceive the phenomenal world. Space and time belong to the human condition; they are first and foremost modes of perception, not attributes of the physical world. The mind, said Kant, is not a tabula rasa which absorbs sensations from the outside world. Kant held that it is not only the mind that conforms to things: things also conform to the mind. In the preface to the second edition of his Critique of Pure Reason, Kant called this the Copernican Revolution in the problem of human knowledge. That is, it was just as innovative and radically different from earlier thinking as Copernicus's claim that the earth revolved around the sun.
As shown above, the mind, for Kant, receives data of the phenomenal world through sensory perceptions. However, in order to understand this information, these sensory perceptions must be processed by certain conditions inherent in the human mind. As well as the 'intuitions' space and time, Kant lists twelve categories which are meant to define every possible form of predication. These concepts (or categories) are organized under four headings: quantity, quality, relation, and modality. In short, everything we, as human beings, experience we can be certain will be imposed within the a priori framework of space and time (intuitions), and subject to the law of causality. These conditions, says Kant, operate as a formal apparatus to bind together a priori judgements. These functions are the pure concepts of synthesis which belong to the understanding a priori. That is, before we have experienced anything from the outside world, the mind already possesses the intuitions, space and time, and the law of cause and effect. However, these intuitions and categories, without sense data, are empty; and sensations, without the intuitions space and time and cause and effect, are blind.
Thus we come to realize that in Kant's view there are two sets of elements that contribute to our understanding of the world. The first set involves external conditions which we cannot know before we experience them through the senses. The second set involves the conditions inherent in the mind. Empiricism argues that the mind is but a 'passive wax' which is pummeled and shaped by sensory impressions. David Hume had reduced the mind to little more than a sponge which absorbed impressions and formulated complex ideas, not by virtue of any innate power, but by force of repetition and habit. Kant refused to accept such a skeptical approach. Whilst accepting that our knowledge of the world enters the mind through sensory experience, he rejected the notion that all knowledge arises out of these experiences. If that were the case, the question arises: whence comes order? For Kant, the world is ordered, not in itself, but because the mind already contains certain innate powers which impose an order on the data received through sensory impressions. The human mind, says Kant, assimilates these impressions and makes judgements on these perceptions by virtue of the power inherent in the mind. These powers allow the mind to make sense of, and function in, the phenomenal world. Access to this world, then, says Kant, is only that which our intellectual and sensory powers, operating in tandem, permit. In other words, our capacity to understand the world in which we live depends on a synthesis between the intuitions space and time and the law of cause and effect, and empirical experience.
(88) Bernice asked:
The German philosopher Immanuel Kant is said to have merged both the basic contradictory ideas of the rationalists (Plato, St. Augustine, Descartes, etc.) and empiricists (Aristotle, Locke, Hume, etc.). He said, 'Thoughts (concepts) without content (sense data) are empty; intuitions (of sensations) without conceptions are blind.'
What does Kant mean? In what way did Kant merge or synthesize rationalism and empiricism through this saying? Explain.
What Kant means is that only synthetic a priori judgements can provide knowledge. A posteriori judgements, associated with empiricism, and a priori judgements, associated with metaphysics, cannot provide certainty, or what Kant called apodictic certainty.
If empiricism is the view that knowledge is derived from sensuous experience, then it is precarious. What is experienced and taken as being knowledge today could change tomorrow. It could change precisely because it is acquired purely from experience: we can have no guarantee from experience that experience will continue to provide us with the same knowledge it has in the past. As the arch-empiricist David Hume observed, that I have experienced the sun rising hundreds of times before provides no certain guarantee or law that it will rise tomorrow. Or perhaps that there will even be a tomorrow. Further, experience doesn't provide us with concepts such as number or quantity. I can experience one sheep and another sheep, but nowhere in sensuous experience do I experience TWO, as in the statement 'I perceive two sheep'. Empiricism provides no conceptual guarantee or law-like certainty that what we experience is true and will continue as before.
Rationalism or metaphysics has concerned itself with deduction from concepts. Thus the concept of God entailed a being fully possessing reality in all logical ways. For example, the subject God necessarily contains the predicate 'existence': God as so defined cannot not exist. From his definition, he necessarily exists a priori. Such predicates are contained in the subject; it is a matter of logical and analytical deduction. Such concepts 'mapped' out existence, and in thinking them by using the pure light of reason, philosophers were supposedly thinking reality itself. In the 1787 Introduction to the Critique of Pure Reason [which I recommend you read to further answer your question] Kant attacks this metaphysical approach of using pure reason to acquire knowledge as not being successful in acquiring any. Instead, each metaphysical philosopher builds a system which is then disputed by other metaphysical philosophers without final settlement, ad infinitum. In other words, this approach is like completing metaphysical crossword puzzles without definitive advance.
So for Kant, both empiricism and rationalism fail to provide epistemic certainty. Kant's philosophical project is to devise a system which does this. In Critical Idealism, he believed he had found just this.
When an intuition [sensation] is presented to the senses, it is not cognised in a raw manner as advocated by empiricism. Firstly, it is presented in Space and Time. Further, it is synthesised with 'Transcendental Categories', termed 'Transcendental' in that they are not derived from experience but transcend it, being inherent to human consciousness. This is the bit borrowed from metaphysics, in that the Categories take the place of a priori 'concepts', although they are not products of thought: they are the necessary conditions which allow the possibility of thought. These are Quantity [Unity, Plurality, Totality]; Quality [Reality, Negation, Limitation]; Relation [Substance, Causality, Interaction]; Modality [Possibility-Impossibility, Existence-Non-Existence, Necessity-Contingency]. Think of sealing wax and a stamp. Intuitions are the wax and the Categories the stamp. When synthesised with the wax, the stamp provides a definite, intelligible sign or meaning. This is what Kant calls synthetic a priori judgement: the a priori aspect of the Categories is synthesised with the intuitions of experience. The product of the synthesis is certain knowledge, or the world of objects in space and time we perceive around us, including others.
When, for example, I perceive a tree, the categories of Quantity, Quality, Relation and Modality have been synthesised with the intuition. I see a single tree [Unity, Totality]; it exists and I can feel the intensive texture of its bark and leaves [Reality]; it is determinate [Limitation]; and all its parts are together in one space [Substance]. It might also display movement when the wind blows [Space and Time].
'I' accompany all these synthetic a priori judgements in that I consciously perceive them [this is called the original synthetic unity of apperception]. It is, for example, like the act of eating a sandwich: in the act of eating [analogous to the production of synthetic a priori judgements], I appreciate all the different ingredients in the sandwich at once [analogous to the apperception which accompanies those synthetic a priori judgements].
In conclusion, Transcendental Categories without content [intuitions, sensations] are empty, just as intuitions without Transcendental Categories are blind.
Hope this helps Bernice.
(94) Wesley asked:
Has anyone written on the concept of a Post-Existential life?
I have entered the final years of my life. The life I am living now can be changed only fractionally by decisions and actions I make now. That is, it is as if all my previous decisions have painted me into this corner of this room in this house, here.
If authentic acts/decisions are those in accordance with one's freedom, my Authenticism is absolutely limited by the limits of my freedom to act/decide, which have become limited by all previous decisions and by Existence itself. My actions have brought me to where I am. I have decided on a course of moral and social Being. I have made decisions that now limit my health. All these limit my Freedoms and thus my Choices. I can no longer act in such ways that bring further Freedoms of Decision. All my existential life has led to this painted corner.
Granted, I have the wide freedom limited by health and financial circumstances to act in opposition to all prior decisions, 'Out of Valid Character' so to speak. To be wicked, criminal, to defile what I have held dear, to do the opposite of what I have chosen as the correct response in previous choices presented by my Freedom. But to do so seems Inauthentic in the extreme. And even so, my opportunity to act Out of Character is highly limited.
Thus, my life could be said to be Inauthentic in that I have little freedom to act, but can this be? Does one live an Authentic Life only to face death necessarily in Inauthenticity?
Rather, I see this as Authenticity leading to infinitely smaller and smaller Freedoms of Action the closer I approach and enter death. Thus, Authenticism leads to lesser and lesser, fading, then extinguished Freedom of Action. Neither Authenticism nor Inauthenticism. But even this seems unacceptable.
I would appreciate comments. Thank you.
Well, I have never been a fan of existentialism, since its moral blindness has always seemed to me to be immoral. Sartre, who first devised this idea of an authentic/inauthentic existence, became a Marxist, although not of the dim Stalin-worshipping sort.
However, it is true in existentialist terms that the choices you make limit your future choices. If your choices are authentic, or sincere, then you should have no regrets about this. Age does limit your choices, but there is no reason to suppose that that makes your life inauthentic in itself. You can only choose from the choices that are available to you.
Existentialism never pretended that human choices are infinite. You can only choose to fly if you have wings.
I understand, Wesley, where this is coming from. However, I will argue that if you accept the truth in existentialism, then there can be no such thing as a 'post-existential' life.
One needs to draw a distinction, however, between 'being an existentialist' (which as it happens I am not) and 'accepting the truth in existentialism' (which I do). You'll see the reason for this distinction in a minute.
Last week, as an exercise, I gave myself a mock interview. If one is being po-faced about this, one could say that it was part of an ongoing project of seeking to 'know thyself' as Socrates advocated. The serious point is that this is knowledge which one is perpetually on the way towards and never finally achieves. Indeed, to think you had achieved it, and that there was nothing more to know would be an act of bad faith.
Of course, the whole thing was rigged. This was intended for an audience. Even so, it was surprising to me, some of the answers that slipped out. (Maybe it had something to do with playing Hendrix's Electric Ladyland album in the background as I was writing, which has a way, as great works of art do, of getting under the skin, loosening and unravelling the congealed layers of the psyche. Hendrix once said he wanted to write music that had the power to heal; he came closer to this than most of his generation.)
One question which I posed myself is whether or not I am a stoic. I said, somewhat cagily, 'I wouldn't describe myself' as a stoic. What I meant was, I'm not of the breed of Epictetus or Marcus Aurelius, or those who follow in their footsteps. I don't believe that all that suffices for a life of ethical virtue is 'knowledge of the Good' or some such Platonic notion.
And yet, on reflection, I realize that I accept the truth in stoicism. That is to say, I believe that there is something to know, which provides an objective basis or rationale for ethical conduct; only that 'something' falls short of what Socrates or Plato aimed for. (One of my ex-students reminded me that I once actually told him I was a stoic, which is interesting as I have no recollection of saying this.)
Iris Murdoch, in her brilliant short monograph The Sovereignty of Good (1970), makes a big play of the shortcomings of existentialist ethics, and the need to rediscover a Platonic notion of an objectively existing Good. I have no quarrel with that. What I'm saying is that fully responsible or 'authentic' action requires that we accept the heavy burden of responsibility for the values we choose to live by. You cannot distil those values from knowledge of the Good. There is nothing to know other than what we can discover through patient, factual investigation (here I am with Hume and the early Wittgenstein). But to be willing to conduct such an investigation when faced with bewildering ethical choices and dilemmas is a responsibility, and to a large extent an ethical responsibility.
'If it doesn't impact on me then why should I care,' is the ultimate question posed to ethics. A true existentialist would say that I choose to care and take responsibility for that choice. I don't think, realistically, that this is a choice. (Hence, I am not an existentialist.) It is about being a person, or being human: to look at the face of the other and never be moved, or successfully resist any temptation to be moved, is to put oneself outside human life altogether. I won't try to give a metaphysical spin on this. I am stating this as if it were a plain fact.
Now to the question: what happens to this 'burden of responsibility for the values we choose to live by' as one approaches death? All the big choices have been made, and one has accepted, taken responsibility for, those choices. I sometimes wonder what my life would have been like if I had not 'chosen relationship'. But I did, and I live with the consequences of that choice. I do sometimes feel, as Wesley does, a keen sense of being 'painted into a corner'. As a widower, with three daughters who still need a parent's practical and moral support, I don't have the range of choices I would otherwise have had.
But this picture is completely wrong, if one interprets it as implying that there are no 'big' choices left, only little or insignificant ones. Of course, one can just walk over the wet paint and make a mess of things. I fully appreciate why Wesley would not consider that a valid option. However, to stay in one's narrow corner is an existential choice. Maybe you've made some bad decisions in your life and now you're living with the painful consequences. You can stay and face the music, or flee. And you have chosen to stay.
But I am going to assume that this is not the case for you. By and large, you are reasonably happy about the decisions that you have made.
The first point to make is a purely practical one: we don't know, for sure, what lies ahead for us. Not everyone gets to enjoy a tranquil old age. Tragedies and disasters have a way of disrupting one's cosy retirement plans. I won't enumerate all the ways in which this can happen. Imagine that this is 1936 and you are a Jew living in Vienna. Or it is 1945 and you and your family live in the vicinity of Hiroshima.
Or let's move things on a bit and take an extreme case. You are close to death. Physically, you are incapable of any movement apart from blinking in response to questions put to you. And someone asks, 'Do you forgive X for what they did?' And let's suppose, for the sake of this example, that what X did was really unforgivable, monstrous. But you still have that choice. Is it a small choice, or is it possibly one of the biggest choices you have ever made?
Or to strike an even more sombre note: Camus in The Myth of Sisyphus (1942) poses, as a philosophical question, what reasons there are not to commit suicide. There is no time in the length of a human life when that option no longer exists as a potential life choice.
One of the points I make early on in the Pathways Moral Philosophy program is that most of us, most of the time, never face really big ethical decisions. Our courage, for example, may never be fully tested. You might well ask whether one can be an existentialist when you live a life of comfort and ease regardless of your age where there are no scary or momentous choices, only pleasant ones.
In H.G. Wells' brilliant parable The Time Machine, the Eloi live like this. We can only see the Eloi as irresponsible children, unwilling to face the grim reality of their situation: easy meat for the Morlocks. But how many persons do, in fact, live such a life of irresponsibility? That is, after all, the point about the self-satisfied bourgeoisie. 'You've never had it so good,' as Prime Minister Macmillan said. But that was to a generation who had lived through the Second World War.
The biggest challenge for existentialists, or for those who 'see the truth' in existentialism is how to live when no important ethical choices ever seem to intrude on one's happy existence. I'm not saying that it's necessarily a bad thing that one is happy and contented. Ultimately, we can't choose the external circumstances in which we find ourselves, the events which intrude on our lives. This lack of momentous choices is a problem at any age, not just in old age.
Yet at the same time there is a part of me which wants to rebel in fury at the idea that anyone has the right to be contented. I don't just mean that the world is in a mess, in so many ways, and that you should be striving to the utmost and to the end of your days to do something about it. That's just one way. Equally strenuous and demanding would be the decision to go back to college and study philosophy, say. Or, for someone in my situation, to look for another life partner. But to be a bit cynical about this: aren't these just so many strategies against boredom? Why this great effort? What difference does it make? You're going to die, anyway. That's the question Camus asks.
Which brings me back to the one thing which I cannot get past. The one indubitable nugget of metaphysical fact: my existence. This is what existentialism is ultimately about. I am not 'some' person. I do not do what 'one' does. The choice, and there is always a choice, is here for me, now. That is what it means to say that 'I exist', in the sense in which this is an active verb rather than a merely tautological statement.