Is reality a dream?
If not, what differentiates the 'real world' from the 'dream world'?
If so, Life is wrapped in a dream. If that is true, then wouldn't death be wrapped in a dream? Is death just one big dream?
and SFB asked:
What is reality?
and Aaron asked:
What is reality? Could we simply be pawns in a child's computer game?
The key to the answer is the recognition that the concepts "reality" and "dream [world]" refer to two distinctly different modes of experience. By the very nature of these two concepts, they cannot refer to the same thing. Therefore, the simple answer is "No!". Reality cannot be a dream without seriously abusing the meaning of the two words. Poets, of course, are granted license to abuse the language for artistic purposes. But philosophers must take greater care.
We each experience "the slings and arrows of outrageous fortune" in two distinctly different modes. When experiencing life in one mode, we notice that things perceived are constant, persistent, consistent, and coherent. When experiencing life in the other mode, we notice that things perceived are dramatically less constant in form and character, often transient in existence, frequently mutually inconsistent both from thing to thing and across time, and far more frequently quite incoherent. One mode of experience draws the focus of our attention, is amenable to inquiry, and responsive to our reactions. The other mode of experience often drifts uncontrollably past our attention, is rarely subject to inquiry, and is often unresponsive to our reactions. On any scale of measure, the difference between the two modes of experience is dramatic and unmistakable whenever noticed. One of these modes of experience we call the "real word", the other we call the "dream world" (or hallucinations, or illusions).
Most of us spend most of our time experiencing life in the "real world" mode. Episodes spent in the "dream world", while they may seem quite real at the time, always end with a transition back to the "real world" mode of experience. Some people, for reasons ranging from drugs to organic brain damage, spend more of their time in the "dream world". Some people, again for diverse reasons, lose the ability to notice the distinctly different character of the two modes of experience, and are unable to distinguish their "real" experiences from their "dream" experiences.
The bottom line is that life is not a dream. The "real world", unlike the "dream world", possesses an unmistakably greater degree of constancy, consistency, and coherence. In the real world, elephants are huge, grey and don't fly. That remains true across time, and is consistent with all other information we have about the real world mode of experience. In the dream world, pink elephants can buzz around your head, and turn into green mice stomping on the roof of your house. The fact that a dream sometimes appears so real you can't tell does not alter the fact that you always wake up.
We cannot wake up from reality; therefore it is not a dream. However, we can wake up to reality from a dream. I think I understand where you are coming from, and it is not an unusual question you have asked. Fundamentally, you are asking: What is reality? The blunt answer is, I don't know! Further, I have not yet come across anyone who does. However, there are several theories to choose from, posited by philosophy, science and religion. Most of these theories are backed by strong logical arguments, although some appear more feasible than others. There is little doubt that we all entertain the notion of a fundamental reality, and most of the world's human population take it for granted that there are things that are real and things that are unreal.
Probably most people who care to give the problem some consideration are willing to accept the materialist views of science, i.e. the universe is in reality solid; size, weight and shape are measurements of a solid reality. Philosophers can be divided broadly into 'materialists' and 'idealists.' Materialists basically hold views sympathetic to science. Reality for idealists is somehow linked to 'mind'; we live in an inner 'subjective' world rather than an outer 'objective' world. However, there are several variations within both the notion of materialism and the notion of idealism; hence, we are presented with a choice of several 'world views.' Added to all this is the notion of 'Dualism,' which accepts that the world is both mind stuff and matter stuff. In dualism there is, in some cases, a link with religious views, where mind and body are interpreted as 'soul' and body. We have a real soul in a real body.
Religion in general is perhaps the thinking body least concerned with seeking out reality. Religion, in all its variations, remains spiritual, and establishes reality through 'faith.' God is real and God created a real universe. There is an undeniable pragmatism about the overall religious view: 'what is, is; do not question reality; trust in God, or the powers that be; accept the reality we are aware of and get on with living the good life.'
To consider dreams briefly: probably most people will accept that dreams are real, though the content may be fictitious. The assertion, "I had a dream last night," is true to me and probably can be accepted as true by the person I am addressing, based on his/her own experience with dreams. We, therefore, both understand what is meant by dreaming, and are each well aware of the difference between dreaming, with its association with sleep, and being consciously awake. The implication in your question suggests a fallacy commonly expressed by those who wish to make a comparison between a solid, material world and the idealist world of mind concepts. It is not the case that if reality is not fundamentally material, then it is somehow a dream world in the mind. The idealist world is a concept of a real world, but constructed differently from the notions of scientists and materialists in general. If our world is an idealist world, then it is a real world, in which we are capable of recognising the difference between being awake and dreaming.
I am no expert on death, for, so far as I know, I have not yet experienced it. Neither have I met anyone to my knowledge who has returned from the dead. However, what little I do know about death indicates that it is a reality and far from being a dream. In fact it is the only outcome of life that we can safely predict. Having said all this, I do keep an open mind on the subject; my years of interest in psychic phenomena keep me alert to possibilities.
Not wishing to appear flippant, because your question is a very serious one, I would say that reality is what each individual chooses to believe; some explanations seem more acceptable than others, and until philosophy, science or religion produces the real answer, if they ever do, we will have to go along with the choices open to us. But there is no denying that one of the present theories just may be true; it is a matter of proof.
I think you're confusing something really basic here. Death is not a state of anything or anyone: it is the absence of existence, the non-being in or of any state. You'll have to think about this in the context of language, which treats 'Death' as though it were the opposite of something existent. But of course it's not. Language used in such a way is a means for us to bring to apprehension a state that exists (or which we presume to exist) and then to identify linguistically a non-state.
Perhaps the simplest way to explain this is as follows: if you and I were stranded in the middle of the Sahara on a hot day, and you say: 'I'm thirsty', I might reply 'there's no water here'. The important language element in this sentence of mine is not 'water' but 'here'. It implies that water is known to exist; it just happens not to be available where we are situated. So I'm not making an existential statement about water or non-water. Whereas, if in the same situation you are on your last gasp, about to expire, then I might be in the humanly very distressing situation of having to understand that, at present, you are, but in a few moments, you may no longer be. Then it is appropriate for me to report, 'this man was alive and now he's dead,' to identify a state of being which I knew you to inhabit at some temporal bracket in history. But to extend this kind of articulation to states which are not, never have been and therefore never will be 'dead', is strictly speaking just a game, the game of language (cf. Wittgenstein). It does not refer to anything 'real', it just refers back to us, and that includes to a large extent not just our understanding but our wishes and beliefs.
I expect that from this answer you will readily deduce that your question about dreams is a non-issue for the same language-dependent reasons. Reality is distinctly of the body: it is therefore experienced by every organism in its struggle to live and survive and reproduce. The only organism to which this is a 'problem' is the mind-endowed creature called homo sapiens, whose state-of-being is among many other qualities identifiable by his ability to note a difference between mental and physical features of this reality. We then go ahead from this fairly innocuous problem and hang enormous weights of speculative thinking on it, of which a great deal is again just part of the game of language.
To put this into a neat capsule for you: we tend to lump the concepts 'mental', 'psychological', 'spiritual', 'soul' and so on into a single basket, as if somehow they were all the same, i.e. parts of a dimension divorced from 'reality', which is then opposed to it as the 'hard stuff'. But just as a rock differs in significant features from a microbe, so 'mental' and 'spiritual' are different categories. What we refer to as 'mental' are states-of-reality which apply to animals as well as us (animals dream!) and are simply the neurophysiological responses of our body to the impact of 'reality' on us. Dreams are generated by the body, by the neurosystem as part of its homeostatic routine; but the dreams to which you might otherwise assign such notions as (e.g.) 'hope' are a different kettle of fish. Again, in language we usually fail to distinguish in the expression 'hope' a realistic expectation and the doodling of the mind.
But whichever way you look at it, in the end 'reality' comes first. So 'reality', however experienced, precedes 'dreams', however defined. In dreams, waking or sleeping, you can do 'what you like', but God help you if you try to do the same 'in reality'!
Though it can't be proved, I believe that the world exists. Existing in the dream of an unknown being is no fun, so I refuse to believe that.
But reality IS a fantasy, that is, you can shape it to your own liking. However, it is practical to share a big part of that fantasy with others; otherwise you'll lead a lonely life (and often end up in a mental hospital).
So death is for me not a dream but another fantasy. In many cultures it is just an accepted part of life. In Christian culture it was generally and officially made into something absolute and a subject of fear (though many Christian priests have a comforting, relative view of death).
This doesn't really constitute an answer but your question sounds very much like Morpheus's in The Matrix. I don't remember the exact quote but yours is quite close unless I've lost all short term memory. Which is all a good way of saying that there are two articles that everyone should read:
David Chalmers 'The Matrix as Metaphysics' at http://www.whatisthematrix.com
Nick Bostrom 'Are you Living in a Computer Simulation?' at http://www.simulation-argument.com.
Both are really good, though the latter is quite tough unless you're familiar with probability theory (though there is a really good introduction to the argument, published in the Times Literary Supplement, on the site) and the former does get quite technical (though Chalmers' ever impressive writing style makes things very clear).
And everyone should go and see that new Matrix film (Matrix Reloaded, in case you've been reading Kant in your room too much; get out!).
This might not answer the 'is life a dream?' question (which leads to interesting questions about how clever I must be (I came up with this?)) but it might help with the analogous 'are we in the Matrix?' question.
I recently got into a debate with this fellow on objectivism versus non-objectivism.
My position was that a Platonic reality exists for scientific, mathematical and moral concepts. That is I want to believe that there is a timeless, universal set of scientific, mathematical and moral principles that exist external to the human mind and are knowable by it.
My opponent disputed this, claiming that any such objective reality would be unknowable, and science is simply a calculational device used for making correct predictions. He also strongly disputed that there could exist a set of universal moral principles.
What does modern philosophy have to say about this? What would be a majority consensus view of things at present? And what texts should I read to get a good grounding in the basic arguments for and against?
Your question about consensus on these issues is extremely difficult to answer. Some time ago Philosophy Now magazine did a survey on what students of philosophy tended to believe regarding ethical objectivity and what philosophy teachers tended to believe. The result as I recall was that students tend to be non-objectivists and teachers tend to be objectivists. However, in general there are likely to be more philosophers who are sceptical about ethical objectivism than mathematical objectivism or scientific objectivism.
However, we need to get a little clearer on how to couch the debate between objectivism and non-objectivism, as these terms can be a little slippery and through the course of history they have changed their meaning. Hence the modern debate over these questions tends to be couched in terms of two positions called 'response-independence' and 'response-dependence'. The motivation for couching the debate this way derives from a general acceptance of a distinction between primary and secondary qualities (derived from John Locke). Primary qualities were those properties of objects that existed independently of our responses, e.g. shape; secondary qualities were those that were dependent on our responses, e.g. colour, warmth, taste.
This way of setting the debate up offers us sharp distinctions between judgements concerning colours such as 'The carpet is red,' and taste such as 'Beer tastes bitter' and judgements concerning shape such as 'The pebbles are round.' The response dependence of the judgement, 'The carpet is red' is explained by saying that the truth conditions of the judgement are not independent of our responses, that is they are partly constituted by our responses. The response independence of the judgement, 'The pebbles are round' is explained by saying that the truth conditions of the judgement are independent of any judgement that we could possibly make about the pebbles. That is to say they would be round even if we had never come across them, or they are mind independent.
Not everyone accepts the distinction between primary and secondary qualities and you have to have one foot in the objectivist camp at least for some judgements in order to make the distinction. Some philosophers do hold a global response dependence view of our judgements but with the distinction in terms of the truth conditions of the judgement we can see what they are arguing about. The great difficulty for those who hold the response dependence view of judgements consists in saying exactly what responses are equivalent to the truth of the judgement i.e., what judgements cannot be false. Most are elusive on this question and it is a weakness in the theory.
Controversial areas in science concern theoretical posits or unobservable entities, but there is no need to see these as being response-dependent. Many scientific entities might not be directly observable without thereby being constructed out of our responses.
With the above in mind we can now turn to maths. A response dependence view of maths looks initially attractive because we may be suspicious of attributing mathematical sets to a response-independent reality. However, certain mathematical theorems like Gödel's theorem look like they are either true or false, and there is no possible judgement on our behalf that could make them so, because of our limited ability to determine the truth of the judgement. Many mathematical breakthroughs only make sense on a response-independence view of the subject matter. The possibility of our best judgements being false is what the objectivist or response independence theorist has as his main foil against the non-objectivist or response-dependence theorist.
Turning to morals, the matter is a little trickier. There is a distinction to be made between objective moral or value facts and objective moral principles. Basically some philosophers like Richard Hare hold that you can have universal moral principles without objective moral or value facts. The two are likely to be more successful if they go together though. Moral judgements such as 'Inflicting wanton cruelty to animals is morally wrong' look like they have truth conditions that are independent of the subject who is making the judgement. That is, the truth of a moral judgement is not to be decided by the person making the judgement. This looks like a conceptual truth: it is what differentiates moral judgements from judgements of taste. However, the truth of the above judgement does not look like it is going to be true independent of all responses; it is not going to be true independent of the capacity of the animal to feel pain or to suffer. So there is a sense in which moral judgements are both response independent, since they do not concern the speaker's responses, and response dependent, since they concern the responses of the subject of the judgement. (The subject of the judgement in the above is 'animals', and is not to be confused with the subject making the judgement, i.e., the speaker.)
If you see a Platonic reality as an objective reality, and approach objective reality as a reality that exists independent of our judgements about it, then it seems you have a good case for making it with regard to maths and science, but with ethics we have to be careful about the scope of this distinction. All of the above would be regarded as objectivist positions.
I would recommend reading the arguments of some non-objectivist philosophers in order to see what their motivations are for adopting such a position. That is to say that most philosophers take response dependence views of subjects because they see a problem with the response independent view. In this way, once you remove the obstacles for your opponent, they should fall in line with a form of objectivism.
J.L. Mackie's Ethics: Inventing Right and Wrong is a nice little book by someone who challenges objectivity in morals and distinguishes between objective moral or value facts and objective moral principles in his opening chapters. Mackie sets out what it would look like for there to be objective values, i.e. everyone's happiness would count equally when making moral decisions, but he rejects this view because of a clash between Platonic or Kantian conceptions of morality entailing reasons for action and Humean conceptions of reasons for action. This is one of the main debating points in contemporary meta-ethics, so if you can find a good way around it then you will be able to defend your position from likely critics. Also try David O. Brink's Moral Realism and the Foundations of Ethics for support.
What is Objective Idealism? Is it considered a tenable position today?
Idealism is a complex subject with several facets; Objective Idealism, better known as Absolute Idealism, is one of them. To come to some understanding of what is a fairly obscure concept, it is perhaps advisable to briefly consider the development of idealism from Berkeley to Hegel. Very often when we refer to development in philosophy, it must not be regarded in an evolutionary sense; it simply means that someone has added a new idea to what has gone before, or maybe has substituted their own idea for the previous one, but none of it can be said to fully supersede what has gone before. Take for example the graded progress to Absolute Idealism, from Berkeley's Subjective Idealism, through Kant, Schiller, Fichte, Schelling, Schopenhauer, and finally the total Absolute in Hegel. No development has completely eliminated what has gone before, and we find that there are supporters of each variation of idealism who will not modify their enthusiasm for the variation they adopt. Hence, what we find is a range of alternative approaches to a difficult question: What is reality? or, What really exists?
I obviously cannot go through a detailed history of the development of idealism here, but I will try to construct a brief indication of the general trend towards Absolutism. You can learn more about each of the philosophers mentioned and their ideas, by reading about them in a good encyclopedia of Western Philosophy.
Idealism is a term originating in the concept of ideas in the mind. Idealism does not quarrel with the naive view that material things exist; rather, it disagrees with the analysis of a material thing that many philosophers have offered, according to which the material world is wholly independent of minds. Berkeley asked how an observer who was aware of nothing but his own ideas could know anything about an external world. The situation is made more absurd when we realise that senses can deceive us, i.e. a sense can present us with alternative ideas about which we have to rationalise to obtain what we might call the correct choice. As there is no way of proving the presence of an external material world, why should we presume there is such a presence? It is more likely that the only world we can justifiably accept is an internal world of ideas. Things that exist are things that are perceived; when no human mind is perceiving an object, we have to presume that it continues to exist because God is perceiving it.
Unlike Berkeley, Kant did not reject the notion of the existence of things outside the mind. However, he believed that we could have no direct access to what was there; all we can be aware of are representations received by way of the senses, mere shadows or phenomena of the things in themselves that may exist out there. To make sense of the phenomena we receive, the mind adds a priori knowledge, knowledge in a way gifted to us by nature, to form mind constructs. Thus, the popular notion that the mind conforms to objects in the world is reversed, and, according to Kant, objects conform to the mind. The world out there is called the noumenal world; the things in themselves which constitute the noumenal world are thinkable but not knowable. Kant called this doctrine "transcendental idealism."
Fichte, though influenced by Kant, could not accept the notion of things in themselves. He asked how we could actually postulate hypotheses about a noumenal world that we knew nothing about and for which we had no proof whatsoever that it existed at all. He decided that the noumenal world had to go; there could be no grounds for asserting something quite unknown, and no meaning in doing so. After this rejection we are left with just minds and objects of experience. Fichte developed the idea further by referring to two parts of mind, the I and the non-I; the I observes what goes on in the non-I, thus eliminating an outside objective world. The I is considered subjective and the non-I objective. The I is what the Greeks might have called the soul. So we have now entered what Fichte called "Absolute Idealism."
The development of absolute idealism proceeded through Schelling, who introduced a spiritual concept, to Schopenhauer, an atheist who considered the absolute to be the will, which he took to be the ultimate reality. Absolute idealism comes to fruition in Hegel. The absolute for Hegel was the Universal Mind, an interpersonal consciousness. Berkeleian subjective idealism and Kantian transcendental idealism construe reality in terms of the content of individual minds; absolute idealism, on the other hand, tends to construe it in terms of an interpersonal consciousness. The distinction between one 'self' and another tends to lapse, leading to a form of monism, according to which there is only one thing, the mind divided up into appearances. All reality is in the mind; there is nothing outside it.
Complicated stuff, but I trust you will grasp the general idea of what absolute idealism is about. Yes, it is considered a tenable position today by some philosophers. In fact, idealism in general is experiencing a revival. Oddly enough, it is receiving a boost from science, particularly physics, which no longer sees the world as a great machine or technical construction; the world is seen by many physicists instead as a great 'thought'! Matter keeps disappearing and re-appearing before their very eyes. Personally, I can only make sense of the world by way of the Kantian idea of mental constructs, but, like the absolutists, I find it difficult to conceive of a noumenal world. Like Bradley, I am out on a limb with the notion of the mind contemplating itself, the real absolute!
Hey, I was wondering about a couple philosophical questions:
For one, who has the right to tell someone else what to do? I mean regarding laws and rules. Also, I would like to hear a philosophical argument about an ongoing controversy. Any kind, but I'm tired of hearing of free will/determinism, and proving the existence of God. Thank you.
As far as your first question goes... first, I'm not sure what the word "right" means, especially in this context. But let's take a couple of scenarios relating to "rules".
Children: children are incompetent to deal with the world. Period. If you've seen a young child, then you know this point is not even worth discussion. Ok, so then their parents have the duty to guide them, and this includes, when necessary, telling them what to do. Now you might be saying, "ok, fine, but I'm not a child any more, I'm 12 (or 15 or 16... or whatever) now". Who is competent to judge your competence? If a 5-year old says that, you say, very gently, "yes, you're a big [boy or girl] now"... and continue telling them what to do, right? So when do you (the parent) stop? When you judge you can, gradually. When is that? Um... obviously I have no answer to that. That's something that has to be worked out, usually painfully, unfortunately.
Disabled: what about intellectually disabled people? What about emotionally disabled? We tell them what to do, right? As little as possible, but still that must be done to some extent.
Incompetent: what about when you're in a situation where something must be done but you don't know how to do it, or don't know well enough for that situation? Then hopefully there is someone around who will tell you what to do. And you'd better do it, or someone will die... if, say, you work in a hospital and a doctor is telling you what you must do. Or you have to survive somewhere and don't know how.
But, you say, these are extreme situations. Yes. But I'm using them to set up a baseline, so to speak. From that baseline, commands, advice, hints, etc... shade off to the other extreme where you are telling a child, for example, what to do. If you want some black and white solution here, forget it. Each situation has to be judged on its own merit.
I'm not going to tackle "laws". I see them as cultural or societal extensions of the above; but I'm sure others will have other points of view.
The second question: well, just think of how tired I am of that. Here's one source of questions:
Tye, M. Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. 2nd ed. Representation and Mind series, edited by H. Putnam and N. Block. Cambridge, MA: The MIT Press, 1996.
You might also look at "the new problem of induction", and Goodman's exposition of that. A nasty problem.
There are also open problems dealing with, for example, essentialism. Are there essences, in an epistemological sense? A metaphysical sense? Believe it or not, this is a very interesting and important question which relates not only to epistemology but also to cognitive structures.
Philosophy of language: see the Pinker/Langacker or Chomsky/Lakoff positions: sophisticated nativism vs. sophisticated cognitive-developmental positions.
Philosophy of science: Kitcher vs., say, Lacan or even Derrida.
I mean, once you get past the basics... the necessary 3 to 5 years learning what to learn about the issues and how to learn about them, it gets very interesting and complicated. But you are wanting to do the equivalent of reading journals in mathematics or physics without learning the language and background. Or reading articles in genetics with a bare knowledge of what DNA is. You can't do it, sorry... there's background you simply have to know to understand, much less participate in, these issues. But. The issues are out there, they're just not easy to grasp, any more than technical issues in physics, genetics, or mathematics are easy for the layman to grasp.
Steven Ravett Brown
Is there a difference between a "Fact" and a "truth"? I realize that some people use the terms interchangeably, but I wondered if there was a logically necessary distinction. I reasoned that the difference between them is that "Facts" are always true. Truths are temporary. For example, "George W. Bush is President of the United States" is true only within the length of his term (let's say 4 years). To make the same statement 8 years from now the truth value will be false. But, "George W. Bush was elected president of the United States in 2001" will forever be true. Is my distinction between "Facts" and "Truth" reasonable or faulty?
I think you are wrong in what you're saying. Right now, "George W. Bush is President of the United States" is a fact. It is also true. In, say, 10 years, it will be neither. So that example is not correct.
One way to test a statement is to put it through logical variations:
Statement: If P then Q.
Converse: If Q then P, which is not logically equivalent to the first statement.
Inverse: If not P then not Q, which again is not logically equivalent to the first statement.
Contrapositive: If not Q then not P, which IS logically equivalent to the first statement.
So let's look at the statement: IF something is a fact, THEN it is true.
What do we get out of that? Well, if that is true, then the contrapositive is true: IF something is NOT true, THEN it is NOT a fact. Let's test that. "Unicorns exist" is not true. It's also not a fact, right? Or is it? We know that unicorns do not exist, at least in any normal sense of that term. Is "unicorns exist" a fact? No. So the contrapositive works.
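If you want to check all four variations mechanically, a small truth-table sketch does the job. The following Python snippet is purely illustrative (it is not part of the original answer); it enumerates every assignment of truth values to P and Q and compares "if P then Q" with its converse, inverse, and contrapositive:

    from itertools import product

    def implies(a, b):
        # Material implication: 'if a then b' is false only when a is true and b is false.
        return (not a) or b

    rows = list(product([True, False], repeat=2))
    statement      = [implies(p, q)         for p, q in rows]
    converse       = [implies(q, p)         for p, q in rows]
    inverse        = [implies(not p, not q) for p, q in rows]
    contrapositive = [implies(not q, not p) for p, q in rows]

    print(statement == contrapositive)  # True: equivalent to the original statement
    print(statement == converse)        # False: not equivalent
    print(statement == inverse)         # False: not equivalent

Running it confirms the claims above: only the contrapositive agrees with the original statement on every row of the truth table.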
Ok... the converse: IF something is true, THEN it is a fact. Well, we might make a distinction here between abstract mathematical propositions versus statements about the "real world" (and I'm not going to deal here with making that latter term clear). That is, if we say something like "the square of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides", which is a true statement, would we call that a "fact"? After all, there is no real triangle for which this is true, it's only really true for ideal right triangles. And so it cannot be a fact. I could find, I'm sure, even more abstract statements in higher mathematics which were true but not facts... and indeed one can fairly easily create logical systems in which true statements, statements which were consistent with the logic, and provable within the system, would be false in the real world, i.e., true but not factual. Thus I could say, "there is a world where like electric charges attract and unlike repel". It would follow, then, that atoms, etc., could not consist of clouds of electrons around nuclei... since the electrons would attract each other and the whole atom would collapse. Given the assumption, those are true conclusions. But there are no facts there.
But if that's the distinction we can make, then we must say that a "true" statement refers to any correct statement, while a fact refers to any correct statement about the real world. That seems a reasonable distinction to me... One could then get into some very bizarre discussions about what is real and what isn't... which as I say I'm not going to touch here.
But that should give you a starting point, anyway, for thinking about this kind of issue. There is a literature on "conceivability" and on "contingency" and on "counterfactuals" which you might look at, although it's not easy reading. There's also this brief exposition on the subjunctive:
And on counterfactuals:
Steven Ravett Brown
The word 'fact' derives from the Latin and has a very precise meaning (which in our modern languages tends to be somewhat obscured, simply from habits of usage). It means 'something that actually occurred.' Philosophically one may include objects existing in that definition, because it is legitimate to speak of objects as 'occurrences' in the sense that they are local concentrations of the 'event spectrum' of the universe.
In the very narrow and limited definition of truth that applies, say, in information technology, where a value may be deposited in a memory site (TRUE/FALSE), this 'factuality' becomes a purely operative mode. The system containing those values does not 'know' whether a value of 'true' is truly true. There is some similarity here to the old form of syllogisms, where you can put up a nonsense maxim and have the syllogism running through to its nonsense conclusion; for as long as the logic of the operation is satisfied, no hiccup occurs. Accordingly (in syllogisms) it is the duty of the philosopher to ensure that the maxim is (as they used to call it) a 'self-evident truth', such as, for example, 'Socrates is a man', and then go on from there. But of course, humans can be very simple minded; and especially in the Middle Ages, many 'self-evident truths' were put up for syllogistic reasoning of which one might say that they were very far from being self-evident. Now in relation to information processing systems, similar rules hold: the attendance of an intelligent agent to control the 'factuality' of the truth conditions being tested is required. Clearly if a value of TRUE is being deposited in a memory location, this value says nothing whatever about the truth or falsity of the condition which led to that value being deposited, for as in the case of syllogisms, the device is responsible only to the operative logic, not its factuality or truth.
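To make the point concrete, here is a minimal sketch in Python (my illustration, using a made-up toy 'knowledge store'; it is not drawn from the original answer). The program happily stores and reports a TRUE value for a nonsense claim, because the operation consults only memory, never the world:

    # A toy store of truth values. The system cannot tell a self-evident maxim
    # from a nonsense one; it only holds whatever value was deposited in it.
    stored_values = {
        "Socrates is a man": True,
        "The Moon is made of green cheese": True,
    }

    def system_reports_true(claim):
        # The operation consults only the stored value, never the world.
        return stored_values.get(claim, False)

    print(system_reports_true("The Moon is made of green cheese"))  # True: the operation is valid, the content is false

The device, as the answer puts it, is responsible only to the operative logic, not to factuality or truth.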
From this, at any rate, you will deduce the one important criterion which separates fact from truth. A fact is an occurrence that may occur without any human agent knowing about it; but if a human agent knows about it, then that agent is responsible for assigning a 'value' to it, e.g. by reporting it. If the report stands up to scrutiny (for example, if it concerns an earthquake that can be independently checked), then the fact and the truth coincide (as in your example of President Bush) and any claim to the contrary will then bear the stigma of 'untruth'. Other conditions may prevail to qualify that truth. The witness may have confused the date on which the event occurred, but this only means that an error corrupted some aspect of that factual truth without impairing its essential content. A lot of history writing is concerned with just such issues, and historians are constantly required to evaluate testimony which may be essentially true, but deficient in one or another facet of this truth (e.g. the reigns of Egyptian pharaohs, which often overlap because apparently the Egyptians did not always rigorously separate the life span and the actual reign of a king).
But many truth situations in human commerce relate to truth which is not tied to events and the testimony which confirms their factuality. It is fairly clear, without delving deeply into the philosophical merits of 'truth', that FACTUAL TRUTH is always conditional. In your example of President Bush, the factual truth about his term of office can only be established when it actually ends; any statement made before that event is not an 'untruth', nor even an 'error', but just a verbal utterance without meaning content. However, a fiction writer may, for purposes of their own, pretend that Bush lived to the age of 120 and remained President for 50 years. This is where the concept of 'truth' becomes difficult to handle. The writer may be writing before or after the President's death; in either case the improbability of this scenario is manifest; yet if the work we are discussing has claims to be regarded as a great work of art, it may show a 'truth content' which transcends the simple fact-truth relation that I've discussed so far. In other words, 'human truth' need not rely on factuality, but does in fact have much more stringent (ethically determined) values associated with itself. The example I've just used recurs in innumerable instances throughout literature, art, opera etc. What merit of truth is contained in Shakespeare's Macbeth? Clearly the yardstick of factuality is inappropriate here. But you may hear it said quite often, about such figures as Macbeth, that the 'truth' about Macbeth, even though it may be 'false' and would be recorded as 'false' in a time machine, is 'true' in a more humanly relevant context. There is an old adage which occasionally pops up in contexts such as these: 'Even if the deeds attributed to this person were never performed, they should have been, because they reflect some intrinsic aspect of that person's character.'
This is the point at which the philosophical concept of 'truth' takes over. Just a few examples:
Truth is profoundly involved in the concept of justice.
Truth has a bearing on aesthetics, i.e. in the relation between art and a very dimly perceived ('inarticulate') truth content.
Truth and morality are inseparably entwined in religious and social interactions.
Truth is ingrained in something we call 'character'. What a person is, deep down.
Truth and factuality may collide in ethical situations: such as a doctor diagnosing terminal cancer and being of two minds whether or not to communicate this to the victim. Here the 'truth' is not (as one might suppose) the illness or its terminal conditions (they're the 'facts'), but the attitude of the doctor and/or those whom he/she consults about the merit of communicating the diagnosis.
There is no need to go on, because your question is limited to what I have discussed above, i.e. the difference between fact and truth. From this, you should take away the fairly important distinction between the two, and I hope that the outcome is a 'truth' in itself, namely that the concept of truth is considerably wider than the concept of fact; that indeed to some extent it includes the concept of factuality as one of its aspects. But, essentially, that 'truth' relates in the first instance to the human agent, without whom there would not be such a concept; and that accordingly it relates most deeply to human issues, where (unlike the fact-truth relation with its essentially linear logic) the concept shows up in its full complexity.
The sentence 'GWB is President of the US' is only true during his term. But equally, that GWB is the president of the US is only a fact during his term. Making either 'always true' or 'always a fact' involves incorporating temporal notions: it's always true that 'GWB was elected in 2001', but similarly, it's always a fact that GWB was elected in 2001. More technically, Tarski's disquotation schema has it that:
DS: 'P' is true if and only if P
For example, 'snow is white' is true if and only if snow is white. Hence, there is a direct link between facts and truths. Whenever you have a truth you have a fact and vice versa. If you still want to make the case that there is a difference, then I guess an intuitive difference might be that the truth predicate only applies to sentences, whereas facts are things 'in the world'. You could also say that facts are what make sentences true. Facts, in that sense, would be the truth-makers for the true sentences.
Some years ago I was reading the London Evening Standard on the tube train and stumbled upon an article on education by A.J. Ayer in which he said, "All education is indoctrination." It struck me as an absurd thing to say then and still does. But is there more to this assertion of Ayer's than meets my jaundiced eye? And what could it be?
Yes it is absurd and no it is not.
It is absurd, because education is inevitable. So it can't all BE MEANT as indoctrination. And what is more, it is needed; it is the function in nature of all parents to teach their offspring to survive.
It is true, because without intending to, all teachers become little gods to their pupils. They can't help teaching their pupils things that are only tradition. That's why after some time pupils must go their own way. Only computers could be given most of the knowledge they need to survive at 'birth', and even then they must learn from experience to improve.
Two meanings of "indoctrination" are given in my dictionary, as follows:
1: to instruct especially in fundamentals or rudiments: TEACH
2: to imbue with a usually partisan or sectarian opinion, point of view, or principle
In meaning 1. "indoctrination" is nearly synonomous with "education" except that it has a more confined scope than the latter since it concerns the elements of a particular subject, as in "children are indoctrinated into the fundamentals of arithmetic."
But, in meaning 2, of course, the term to educate is very different from to indoctrinate, since education is supposed to present students with information and ideas without any attempt to present them with any partisan or sectarian opinion.
Ayer, it seems to me, was playing on these two meanings so as to get across his own view of what was actually going on in the schools as opposed to what he thought the schools should be doing. They were indoctrinating rather than educating, which was what they ought to be doing. And, so, in a way, Ayer was, himself, indoctrinating the readers of The Evening Standard rather than educating them. Of course, I never read the article you are referring to, so I can't know that what I say above is true.
Philosophers often use the device of saying something paradoxical in order to emphasize a particular viewpoint on a matter. In Plato's Republic for instance, Thrasymachus tells us "Justice is in the interest of the stronger." Now, that is exactly what justice is not. But, by putting it that way, Thrasymachus gives us his view of how, in fact, the notion of justice actually operates in society.
Ayer, I think was doing very much the same thing, only, of course, concerning his own, perhaps jaundiced view of how people are educated in Britain. He might be understood as saying, "We are supposed to be educating people, but what we are doing is indoctrinating them."
Just what do you mean by the term "indoctrination"? What did Ayer mean? I haven't read the article, but given that he is a philosopher, he must have defined that term somewhere in the paper. Did he mean what you mean by it?
That's point one. Point two is this: you're a child learning, say, mathematics. Now, mathematics, real mathematics, is not addition, subtraction, etc. The closest one comes to doing what mathematicians do, while one is in school, is when one learns to do proofs in geometry. Mathematicians do proofs, for the most part, in extremely difficult conceptual areas... and attempt to think up new things to prove. Now. What must a child learn, and how must they learn it, in order to even get to a point where doing mathematics is at all conceivable? They must learn arithmetic. Can they learn it by doing proofs, i.e., by proving, say, that 2+3 = 5? No, of course not. So they must first learn, by memory, what addition is. Then how to add. Then facts like 2+3 = 5. And on, and on. Then at some point perhaps they will find that they want to and have the ability to think of tentative mathematical truths, and prove them correct or not. So the first stage is memorization of the basics.
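As an aside (not part of the original answer): here is what a formal proof of that arithmetic fact can literally look like in the Lean proof assistant, just to make vivid the gap between memorizing 2+3 = 5 and proving it:

    -- Lean 4: the claim holds by pure computation ('rfl' checks that both sides reduce to 5).
    example : 2 + 3 = 5 := rfl

Even this one-liner presupposes a good deal of machinery (numerals, addition, equality) that the child must first simply absorb.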
Do you see where I'm going with this? The child is, effectively, indoctrinated with the basics of mathematics, i.e., they must learn something, and accept it, without fully understanding it or being able to question it. How else would you proceed? This is true for pretty much all fields, even to a certain extent for philosophy... although that latter might possibly be the exception, if one were of a sufficiently Socratic bent. But even there, you will find tricks in the Dialogues which amount to the same thing. In physics, one learns about forces, vectors, electromagnetism, and so forth, without being able to question them. In medicine... etc., etc.
However, once one gets past the point of learning the basics, then one can start to question what one is learning; one has the tools to investigate why 2+3 = 5, and so forth. So I would agree with Ayer up to a point. Past that point, I would not agree. The difficulty, of course, is determining how far one can go before one starts playing with what one is learning. To read one must be "indoctrinated" with the alphabet, with grammar, with a basic vocabulary... and then one can make up one's own words... in the proper context. That context comes at different times for different people, depending on ability, interest, etc... And of course there are cross-influences between some fields that enable one to immediately question something in one if one has already learned another.
Steven Ravett Brown
Should a philosopher always take the most clear and direct approach when writing a piece of philosophy? Now I've given this some thought and it seems to me that those who insist on complete clarity at all times are oversimplifying the issue, especially when it comes to the question of interpreting the thoughts of other philosophers.
For example, how would one go about giving an accurate account of Heidegger's Being and Time if it's insisted that this account must be explicated in everyday terms? To explicate Heidegger in 'everyday terms' would do a great disservice to his thought, and to those trying to get an accurate understanding of his thought. If one is serious about explicating the ideas of a particular philosopher, then one must employ the technical terms used by that philosopher, for otherwise philosopher's arguments get leveled down to a vague semblance of their original form. Should one always try to define these technical terms? If one is writing a serious philosophical essay on a particular philosopher, shouldn't one assume that the potential readers of this essay will have some understanding of the philosopher the essay is about? Or should one simply assume that everyone is completely ignorant of the philosopher in question? Should one be giving a primer on philosophy each time one decides to undertake a philosophical project?
Furthermore, it is my contention that, if you're after a specific effect, the only way to get that effect is to eschew the notion of complete clarity. Let us take the topic of aesthetics as an example. Often I find that if I want to give a non-reductive account of the aesthetic experience, I actually have to perform an action in my writing that produces an aesthetic effect. I could ramble on and on about poetry in a style as transparent as Ayer's, but I would fail to capture the essence of the poetic. Now you may insist that my argument is fallacious if I can't give a completely transparent account of the essence of the poetic, but I think this is false because, if one can actually write poetically about poetry, then this serves as a more convincing argument than simple logical deduction. Logical deduction understands nothing of the language of poetry, but you can be sure that poetry understands logical deduction perfectly well. Now this is not to say that logic is unimportant. That would be an absurd statement for a philosopher to make. But logic can only reach a certain point in explicating the poetic, and once this point has been reached, we are left with a remainder: the quiddity of the poetic. To reach this zone it's necessary for one to ignore the limitations of strict logical argument, and proceed with a performance of the poetic.
Lastly, there are many great philosophers who are not 'clear and direct'. How 'clear and direct' is Kant's Critique of Pure Reason, or Hegel's Phenomenology of Spirit, or Heidegger's Being and Time, or Derrida's Of Grammatology, or Nietzsche's Thus Spoke Zarathustra? Should we ignore Sartre because of statements like, "I am what I'm not, and I'm not what I am"? Must all philosophy be judged under the doctrines of the Anglo-American tradition of complete clarity?
Your question is very pertinent and well argued and you have my complete sympathy. Somewhere in his lecture cycle on 'Philosophical Terminology', Adorno takes Wittgenstein to task for his assertion that we shouldn't and cannot talk about matters that we know nothing about: it is precisely the office of philosophy to do that, Adorno retorts. Language is an extremely imprecise communicator of almost everything of value to human beings (that was one of Wittgenstein's points), but this only means that in writing down a possibly very complex argument or raising issues that are new to the philosophical vocabulary, a philosopher may find him/herself unable to express what needs to be said in 'clear' prose. The essence of this matter is that it is part of the human equation to understand very well in non-lingual terms many complex forms of communication (e.g. symbols) which are also difficult or impossible to decompose into plain statements; and when philosophers write highly convoluted arguments, invent exotic nomenclatures or implicitly redefine standard expressions to suit themselves, this is often an appeal to the intuitions of their readers to fill the comprehensibility gaps by marshalling their own imaginations. Under these terms, philosophy can become a creative exercise not only for philosophers, but for their readers as well.
This is not to say that clear writing is not a desideratum, ultimately. If you had the chance to ask Hegel, he would unquestionably agree. No-one could have been more sensitive to the deficiencies of his diction than the man himself; but he had important things to say that he simply found himself unable to frame into 'clear prose'; he was always wrestling with language, like Jacob with the Lord's angel, in the service of precision of utterance, and came out of the fray somewhat bruised and dishevelled. Indeed of Kant it is well known that he expressed the sentiment that he was forced to leave elegance to his tailor, because he simply lacked the time to polish his text. Consequently it is not a valid counter-argument that men like Descartes or Nietzsche or Santayana wrote in prose to match the best of their respective literary languages. C'est le metier. What I mean by this is: any typical sample of 100 books on Descartes would be devoted to precisely the same task as any sample of 100 books on Hegel: elucidating the meaning of the authors. But didn't Descartes write 'clear and simple prose'? Well, yes. But so did Hegel, on his own terms. For although Descartes might be more readily served up in the Sunday Literary Supplement, what he really meant is no simpler to extract from his texts than anything Hegel had to say. In a word, until somebody takes up Leibniz's idea of a characteristica universalis and develops it into a richer as well as more precise communications vehicle than plain language, we're stuck with what we've got. So your point is well taken.
First question. Yes, a philosopher should aim for clarity, but should not accept absurd terms. 'Complete' clarity doesn't necessarily mean oversimplifying, though demands of brevity sometimes do. When clarity seems awfully difficult, that only means you don't master the stuff. My personal experience is that anything you really understand through and through can be explained in a few pages (or fewer). If you can't explain Heidegger in common words then you don't really understand his point (replace 'Heidegger' by any name). By this I don't mean Heidegger's mathematical views, but his philosophic ideas.
I'll explain: in SF movies, the subject of a whole formal philosophy book is sometimes treated in a few sentences. Not because of an extremely clever text, but because of the context in which that sentence is used (making use of the visual power of movies).
Second question: Yes, focus is always useful.
Third question: be careful about accusing other philosophers of unclarity. Consider the time in which their work was written.
My experience is that Kant, for instance, seems unclear at present (he was clearly a product of his time), BUT considering his circumstances he is quite understandable. It is just that his ideas have in the meantime been much improved upon. Nietzsche's Zarathustra is, on close reading, very clear. I even wrote a summary in which every chapter takes only a few sentences (if you're really interested I can give you the internet address). Mind that explaining something you only half understand takes a lot of words.
If it is only Anglo-Saxon to demand 'complete clarity', then there surely are other ways to look at it. No harm meant, but that is slightly arrogant.
Remember, things are as clear as your own eyes see them.
Since an aesthetic experience is an experience then I agree that you cannot capture it without giving an example. Furthermore, that example is likely to be a philosophical poem and so logic, I agree, would not be very important. But a logical poem might be quite fun.
Translation of technical terms into everyday language is an attempt to understand. You can assume that potential readers know something about the philosopher you are writing about, but it depends on the level of the essay. If it is an undergraduate essay you have to show that you understand the philosopher. If it is not, it is still a good idea to explicate the ideas of the philosopher since this allows readers to know whether you have the same interpretation.
It is the practice of showing that you understand which leads to the Anglo-Saxon requirement for clarity.
Something is always lost in translation even if it is just the tone of the original philosopher but it would be very restrictive if this was to bother philosophers too much!
Does anyone ever think that the world would have been better off if man had never taken, 'control of his affairs,' in the first place?
Evolution. Imagine what that idea, that man's image is of but a limited time and of no absolute fixed value or duration, would cause if it were generally known or believed. Do you think that man jeopardizes his future by feeling revulsion at the idea of his evolution, due to perhaps some childlike fear or immaturity? I can't imagine people being too eager to accept, perhaps, eyes on the back of their heads as an evolutionary advancement, or whatever it might be. So perhaps in taking as many preventative and reversive measures as possible the fearful creature might destroy himself? Why? Because everything evolves, everything improves.
Now if man prevents this natural improvement he will inevitably fall behind in the 'survival of the fittest' scheme of things. I do not mean to suppose that monkeys will rise up and overthrow their masters, as is the case in so much paltry science fiction, but think for a moment. What is the great problem in medicine today that is already worrying doctors, scientists and the rest? Is it not the evolution and adaptability of bacterial and viral infections? If I had to spell out a certain problem I would say that evolution affects man, it improves him, and in doing so it improves every organ in him, including his brain, not just his natural immunity. And an improved brain means better intelligence, intelligence to think new ideas, perhaps even ideas unimaginable and extraordinary to us today. Ideas which, nonetheless, he needs, needs to think up new ways to combat the ever improving threats to his existence. So, do I have a case, or am I just weird?
Very interesting question, Edward, deserving of a well-considered answer. Let me recommend to you, however, in asking important philosophical questions (not just to Pathways, but in general), that you never assume sight unseen that you're the first to put them. For example, 'Does anyone ever think... etc?' is an issue as old as the hills and there must be thousands of books and articles on the subject. By putting questions in this rhetorical manner, you force me to choose between believing that you really have never seen an article or spoken to anyone who shares your opinion, and believing that you are indeed just using a rhetorical device. I don't know which applies to you, and that makes it difficult for me to assess how to respond: you see the problem?
Anyway, I'll assume that it is rhetorical and that you're just looking for another answer because you've not seen or heard one that really satisfies you. This means that I can simply respond 'yes' and leave it at that.
And so now to the other parts. I'm sure your idea of 'revulsion' applies to some people, even whole societies. Maybe 'revulsion' is not the best word, but this is a minor consideration. We do constantly jeopardise our future; but this is not rooted in a fear of 'improvement'. The statement 'everything evolves, everything improves' is factually incorrect. For example, many species of bacteria have never evolved beyond their original state, and an argument may be put that creatures who reproduce by cloning are no longer evolving, and that their mode of reproduction is precisely geared to keeping the status quo going indefinitely. Further, many way stations along the path of evolution are not improvements for many creatures and/or branches on the tree of evolution. One may put the proposition that any species which is now extinct was not intrinsically an improvement on what went before. One may propose, even more radically, that homo sapiens occupies an evolutionary rung which has overshot the mark in terms of adaptability and is therefore very likely to 'write himself out' of the further evolution of species. The point of these deliberations is that evolution is not a sort of mechanism of progress, but rather an interplay between organisms and their habitat, in which the former adapt to the conditions which prevail in their niche, while the latter changes on two fronts simultaneously: through the impact of organisms (which must inevitably change it) and through shifts in its chemical composition from time to time. One outcome of all this may be that devolution is on occasion a preferable alternative. In other words, to think of evolution as an upward curve is a mistake. Evolution is neutral: and in the scientific literature you will find it stressed repeatedly that zero change is the rule of the game in stable environmental conditions.
Once you understand evolution in this light, namely that adaptability, not improvement, is the key criterion of evolution, then you will be in a better position to judge the crucial issue of mankind's impact and the dangers involved in it. What you call 'improvement' is, in fact, the disposition of some types of organisms towards more complex evolutionary patterns, i.e. the development of more sensitively attuned response systems. Take the evolution of nerves as a paradigmatic example: millions of species have nerves and therefore a greatly improved resource of adaptive response to changes in the habitat over creatures without nerves; then evolutionary stress may induce a further evolution to a nervous system with control and evaluative facilities in a smaller number of species; from there fewer species still will go on to evolve brains. Speaking generally, this is to date the topmost rung on the evolutionary ladder: fish, birds and mammals possess brains of varying size and resource capability. Along comes, in a kind of sudden upward push possibly beyond the needs of the species, the brain of homo sapiens, which displays a crucial change in the capability of brains-in-general. Brains-in-general evolved for the superior handling of short-term evolutionary changes, even instantaneous changes, i.e. changes where the time span is too short to allow the quasi-mechanical interplay between organisms and habitat that is the norm; but the human brain goes beyond that in that we can think of the future, i.e. events which have not yet happened, and generate plans and ideas and visions of possible tracks into the future against which we may wish to equip ourselves. One obvious advantage of this is that the creature so endowed is able to build structures, both 'hard' (material, so as to provide an artificial habitat which is to some extent independent of the natural environment) and 'soft' (societal and cultural, designed to facilitate the coherence and cooperativeness of the species in its efforts to survive). One disadvantage is that the animal instincts which we inherited are still in force and have a tendency to produce 'misreadings' of these possible futures in the light of desires and the short-term fulfilment of supposed advantages, all of which change the habitat very quickly and thus create evolutionary conditions in which we, as well as many other species on which we depend for our survival, are endangered.
If we accept the reasonable conjecture that ultimately homo sapiens is the survivor of an arboreal simian (ape-like) branch which is now extinct, then we can see easily, by comparing the life habits of other arboreal mammals (e.g. monkeys), what our problems may be. For example, we have no instinct for cleaning up after ourselves, because our instincts were formed in the trees; we have no instinct for curbing our natural aggression, because in an animal lacking 'tooth and claw' that aggression is in the main designed to frighten rather than to kill; and so one could go through a long list of bad outcomes of the evolution of certain simians into hominids. These outcomes are a result of instincts already formed and genetically transmitted which have not had the time to adjust properly to changed living conditions. Let me point to our eyes, whose stereoscopic ability is a reminder to us that once we needed that sharpness of vision to cope with brachiation. Against all these defects and maladjustments, our brain is the only makeweight: but our brain is heavily influenced by this instinct legacy which we carry around with us; and this is not a problem we are likely to solve in the short term. It is a problem of which we have been aware ever since Darwin started the evolutionary ball rolling, but which as a whole we have never yet had the courage to face squarely. Instead, we've had two world wars, nuclear bombs and pollution near to suffocation level.
So, as all the old religious and philosophical stand-bys have it, the potential for good and bad has been placed into our own hands; we are the 'husbands' of the earth in the sense that as consciously aware creatures we bear an enormous responsibility extending far beyond our own needs, for every decision we make as a collective affects untold numbers of other creatures and the vegetative world as well. The danger we are facing most acutely is that our perceived and imagined needs will outrun the capacity of the planet to sustain them; equally deleterious is that many of those organisms which we perceive as pests, nuisances and dangers have the same 'right' to existence as we do (although strictly speaking no-one has a 'right' to live, only the privilege), and that from sheer ignorance we are likely to erode much of that hardly-perceived life on which our own depends.
To some extent, then, your concern is surely well-founded, but the presuppositions by which you judge it are still a little off the mark. You're not alone in this; but since your question revolved largely around evolution, I have concentrated on this to hammer home the point that with all our 'knowledge' and acceptance of the idea, we have not yet, by a long shot, come to an acceptance of what is entailed in this knowledge. We have not yet, as you'll surely agree now, even come to an acceptance of such a simple fact as the incompatibility of our instincts with the need for co-operative living in the terrestrial mode which was probably forced on our distant ancestors by the cyclic recurrence of forest recession. One day, I guess, we'll be forced to; let's just hope that when it happens, it will not be too late!
I recently heard someone say, "I might easily have been someone else after all, mightn't I?" The obvious question is, "Might he have been?" Any thoughts?
First of all, we have to get clear what,
(P) 'Individual x might have been individual y'
means, as the truth of (P) is going to depend on what the context of utterance is. I'm going to assume that (P) means,
(P1) 'It is possible that x has different properties than x actually has.'
Let 'x' refer to you. So what (P) means is that you might have had different properties. So, say, you might have had the properties of being a professional footballer, whereas actually you are, say, a professional basketball player.
In terms of possible worlds this turns out as:
(P2) There exists a world w & At w, Ian exists and Ian is a professional footballer.
Well, you might say, "Look, that's all well and good, but surely I can only exist in one world; whoever that other 'Ian' is, it certainly is not me!" What we then need to do is talk about counterparts of you. A counterpart of you is an individual to whom you are similar in some qualitative respect. Hence, (P) turns out as,
(P3) There exists a world w & at w, there exists an individual y & y is a professional footballer & y is a counterpart of x.
(Remember 'x' refers to you.)
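If it helps to see the quantifier structure laid bare, here is a rough rendering of (P3) in the style of Lewis's counterpart-theoretic translation scheme. The predicate letters are just shorthand of my own: Ww for 'w is a possible world', Iyw for 'y exists in w', Fy for 'y is a professional footballer', and Cyx for 'y is a counterpart of x':

\[
(\mathrm{P3}) \quad \exists w\, \exists y\, \bigl( Ww \;\wedge\; Iyw \;\wedge\; Fy \;\wedge\; Cyx \bigr)
\]

On this reading, the 'might have been' claim is true just in case some such world w and individual y can be found.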
So, in answer to your question, yes you might have been someone else, but what that means is that you have counterparts who are different from you. You might have been David Beckham but that is only true if David Beckham is one of your counterparts.
I understand that all this is controversial, but any answer to this question is controversial and I think this is the best answer overall. I don't have time to go into details but see David Lewis, On the Plurality of Worlds (1986) chapter 4, and John Divers, Possible Worlds (2002). Feel free to email me with any questions.
What exactly is it to imagine that you might have been some other person from the person you actually are? Is it necessarily the same as imagining that you might have been different from the way you actually are?
Whenever we think about how different things might have been from the way they are, we are thinking about other 'possible worlds'. No need to worry about how 'real' these possible worlds are (Lewis takes the extreme view that other possible worlds are as real as the actual world). If you like, it's just a convenient way of talking, nothing hangs on this so far as your question is concerned.
I am now thinking about another possible world. In this other possible world, I have just intercepted a careless pass from an Arsenal player and I am racing with the ball towards the Arsenal goal. Other information about me: my name is David Beckham, I play for Manchester United, I am married to Victoria who used to be 'Posh Spice' from the Spice Girls. In my garage, I have a Bentley and a Ferrari. So far, so good.
But now there comes a tricky question. How did I get to be 'David Beckham', when my father's surname was Klempner? Two possible answers to this: 1. In the other possible world I changed my name (a better name for an English footballer). 2. In the other possible world my parents were not my actual parents but were in fact Mr and Mrs Beckham.
If I opt for 1. then I am not imagining that I might have been David Beckham, I am only imagining that I might have had the name David Beckham (as well as various other enviable attributes). In that world, one might suppose that there are two David Beckhams, myself and the son of Mr and Mrs Beckham (who hated sport and became an accountant).
So I opt for 2. How did my parents come to be Mr and Mrs Beckham instead of Mr and Mrs Klempner? What the question is asking is what connects me, the person writing these words at this moment, to the person called 'David Beckham' in this other possible world. What makes this individual (in Lewis's language) my counterpart?
This isn't a point necessarily about the first person. It makes sense to ask (although no-one ever would) under what circumstances, and in what sense, George W. Bush might 'have been' Saddam Hussein, when the person asking the question is neither George W. Bush nor Saddam Hussein. I'll leave you to work out the details.
To get back to Mr and Mrs Beckham. Let's say that in this other possible world, when the embryo which later developed into me was just a few days old, it was secretly removed from my mother's womb and placed in the womb of Mrs Beckham. Would that be enough to make me David Beckham? All I am imagining now is my being substituted (!) for David Beckham. The embryo which would have grown into David Beckham was either destroyed, or maybe became his non-identical twin brother Derek...
First let me begin by thanking those who provide this service. It is very helpful.
I am a doctoral student in Instructional Technology. If I wanted to read more about how classic thinkers viewed the creation and use of tools and technology by human beings, where would I begin? Who should I read first?
Let me just explain why I ask this question. I am concerned with the development of theory in my field. We discuss a lot of application, but since my field is relatively new, solid theory has not yet been developed (in my opinion). I really don't have a background in philosophy, so I hope my question is not too strange. I had a thought that perhaps I can gain some ideas about the development of theory in IT from reading some classic thought in philosophy on tools, technology, and/or theory and knowledge creation. The reason I have this idea is because many other fields have beginnings in philosophy. The philosophical questions that the classic thinkers asked and discussed comprised the foundations of many of the sciences and social sciences we know today.
I hope someone is able to help me with this question and that if I am barking up the wrong tree, that someone would kindly let me know.
The real difficulty, I would say, is that thinking about technological issues could hardly precede the development of technology itself, and the use and theory of tools, implements and work methods is not, unfortunately, a topic to have exercised any philosopher at more than a very superficial level. Invariably, when a new science begins, the most important issues to exercise either philosophers or scholars influenced by them are epistemological issues, and methodologies usually lag far behind, at least insofar as they are elaborated and written up. I mention these difficulties mainly for the reason that I'm inclined to recommend some pertinent reading to you, but with the caveat that, I'm afraid, the risk of wasting your time is yours!
The best known of the classical philosophers who actually has a lot to say about these issues is Francis Bacon. I suggest you dip into his Advancement of Learning and The New Organon. Bacon is not shy about 'principles' and produces innumerable classifications and taxonomies related to the various branches of learning (of which what we call technology today forms a part). Now depending on your personal inclination, you could be bored stiff or fascinated by the prophetic genius that glimmers through the dim fog of a very primitive science. One way or another, you may find food for thought in this.
Now following on from this, Bacon was the inspiration behind the Encyclopedie of the French 'philosophes', headed by Diderot and Voltaire. I regret I've only read the latter's contribution plus a small handful of others, so I can do no more than recommend that you consult someone who is knowledgeable about the work. I suspect (but can't confirm) that something of value to you might be found in there.
Another work worth looking into might be Comte's Positive Philosophy. As philosophy it is dreadfully dated now, probably just for the reason that so much of its matter and principle has been updated in the proliferation of methodologies. With Comte we begin in any case to overlap with the rise of autonomous scientific principles; and I doubt there is much in writings on the latter that has not since been improved upon. But just in case your interest is awakened, you might also dip into the writings of Hermann von Helmholtz, which give a fascinating and first-hand glimpse into the technical accomplishments of that era (cf. Science and Culture, Chicago UP).
Finally, for a purely scientific point of view, yet from the vantage point of philosophy, you can't go past the correspondence between Leibnitz and Clarke (the latter a mouthpiece for Newton). This might be aiming somewhat too high, but it can't hurt you to read at least a couple of exchanges to get a taste for what's going on here.
I would love to think that this helps. But though I remain dubious, let me add (speaking purely of my own disposition) that of the many ways to kill time, few are as interesting as these for their own sake.
I think you're looking in the wrong direction. You can read Heidegger until your face turns blue, but he won't help you in your work; in fact, he was very much a Luddite. You might look at Aristotle and the idea of techne, but again, I don't think it will help. The area you want to look at, in my opinion, is cognitive psychology. There's been a tremendous amount of work there on the various modalities of perception, manipulation, theory formation, and so forth... so much that I don't even know where to begin with references... I could give you 50, easily, right off the bat. Go to your Psychology department and find some people in these areas, or who can start you reading in them. Computer people on the whole do not know this literature. The problem however is that it is truly enormous, and you're going to have to do quite a bit of reading to extract what you want to know. I guess you could start with the (very old) literature on the tachistoscope and keep going from there, and take a look at the Stroop effect, and also I'd recommend (which you might know) the MIT robotics lab website... There's Gopnik on 'theory theory'... and of course the huge literature on cognitive development... I mean, I think you're doing the right thing, but you may have to (and I'm quite serious) get another PhD, or do the equivalent reading, to really learn this as thoroughly as it should be learned.
Steven Ravett Brown
Certainly your question is not strange:
About philosophy and technology I found two sites that may be interesting:
Technology and philosophy could help each other in more ways than one.
What do we mean when we call someone a genius?
As far as I know, the first use of that term was by Kant. He used it to refer to someone who, because of a profound connection with reality, was able to create new rules for constructing and understanding what Kant termed "aesthetic ideas". This connection to reality, to the noumenon, was itself unknowable, and the noumenon remained unknowable. Its significance was that it enabled some few people to have, occasionally, free will, inasmuch as they were through that temporary connection able to free themselves from the causal patterns or laws, to put it roughly, of the knowable world. Now this is a summary of the motivation of Kant's Critique of Judgment, and for a 25-words-or-less statement I think it's pretty good. As a statement of the C of J, it's lousy.
Anyway, that's where the term started, as far as I know. It was, I believe, taken up by Goethe to refer to outstanding artists. And it took off from there. Today it means nothing. It's just a word someone uses about someone else when they admire them. So, in answer to your question, I have no idea what "we" mean... because the term really has no clear meaning any more. Different people mean different things by it. Does that help?
Steven Ravett Brown
The word itself has a long history of changing meanings: and in the early years of its usage it was often a simple synonym for cleverness. However, it was then mostly used in the form "X has a genius for...", meaning that X has a talent.
But in the inception of the German branch of the romantic movement, the usage of the term underwent a subtle shift. The presiding "genius" of that movement, Herder, used it in such expressions as, for example, "the genius of the language", where he is not referring to a person, but to the language itself as a kind of river that flows through the population and impregnates the people speaking that language with its spirit. Herder was especially vocal in defending a then popular theory that the authentic poetry of a nation arises spontaneously and anonymously, long before individuals make it their business to "cultivate" language poetically. You can see here the connection between the original French meaning of genius (spirit), which Herder directly imported, and the connotation of authenticity. Both of these eventually converged in the extension of the notion to individuals.
In that definition, then, a genius was a person imbued with the authentic spirit of poetry (or 'poesie', which is the term they preferred in order to distinguish the authentic from the manufactured). The author of the Nibelungen Poem was such a genius, and it was rather a recommendation that he remained unknown. Similarly with the Edda poems of Norse mythology, the Beowulf etc. Incidentally you will find in these adumbrations the kernel of the later suspicion that Homer was just a collator of old legends! Now Herder is almost unknown in the English-speaking world, but if you've read Goethe's Werther, then you might remember the almost hysterical ravings over Ossian and Fingal, which Goethe (on Herder's say-so) classed as "authentic" Scottish folk poesie (they didn't know it was a put-up job by one Macpherson, a second-rate versifier!). Roughly at the same time he (Goethe) wrote a eulogy on Erwin von Steinbach, the architect of the Strassburg cathedral; and again the same notion of "genius" prevailed here, in that the master builder was an intuitively authentic embodiment of the gothic spirit.
The whole notion acquired the momentum of a cult in very short order (that time is still referred to as the Era of the Genius-Cult in literary history); even the old fogey from Konigsberg [Kant, Ed.] felt obliged, in his aesthetic treatise, to offer a sort of "definition" of genius along roughly those lines; and later his avid pupil Schiller worked out a comprehensive aesthetic (Uber naive und sentimentalische Dichtung) which made the distinction between the "authentic" and the "cultured" poet a cornerstone of poetic theory that remains to this day pretty much in force in German philosophy. One suspects that the overwhelming esteem accorded to Goethe owes not a little to Schiller's advocacy of him as such a (perhaps the only modern) specimen of an "authentic" genius. Thus the die was cast; henceforth the term "genius" became affixed to individuals of a particular creative potency.
It was not long afterwards that the imprecise and indeed indefinable notion of "authenticity" came tacitly under fire; and more and more of the creative types of the "sentimental" variety found themselves called "genius", even though according to the standard set by Schiller it was an inadmissible licence. Then, as the result of a natural attrition of exaggeration, to which especially the late stages of romanticism were prone, the value of the term genius became debased by over-usage; essentially it has become again what it used to be, a synonym for cleverness. Yet because enthusiasm for things romantic (novels, music, poetry etc.) has never quite died down, the term itself is carried over and retains in these specialised contexts some of its old force of meaning.
So ultimately the answer to your question is: if used today in an everyday context, it probably means nothing other than "X is clever" or also "X is pretty stupid, but for some queer reason he's got a knack for stringing up pretty words; guess he must be a poet." But the old custom, as I said, has not died out altogether, and so it is still occasionally used to mean "X is really an exceptionally inventive/ creative personality".
I was most interested in Jurgen Lawrenz's statement in reply to my previous question, "The most important thing is that the universe know itself." Or at least this is what I understood your answer to my question to be. What is your basis for this statement? What if we had a universe that did not "know itself"? What kind of a universe would that be, and what would be wrong with that?
I suspect that I'm not telling you anything you don't know yourself when I say that philosophy is going through a very difficult phase. It has been, actually, for nearly 200 years, because that branch called 'exact science' has made such a powerful impact on civilisation that we are (and I include many philosophers in this) altogether in danger of forgetting the really important things. It is wonderful to have science and its daughter technology delivering a lifestyle beyond anything the greatest kings and potentates in history could even have dreamed of: today, such comforts and achievements (like the Internet) are within virtually everybody's reach. But we have not made equivalent progress in the mental (psychological, spiritual) sphere, where in a sense we've remained on the level of an overgrown chimpanzee, as some writers are not shy of putting it.
That's a long preamble to your 'simple' question; but of course it isn't simple at all. Its purpose was to make the very important point that scientific research is a methodology, not a philosophy. The evaluation of that research should still be in the courts of philosophers, but I'll be the first to admit that philosophers have on the whole turned their back on it and left us in the lurch (by 'us' I mean us-as-a-society or civilisation). Your question is one of innumerable such questions that people nowadays address to scientists, in the belief that science, so powerful and all-mighty, must surely know the answer. The trouble is: they don't. It is in fact impossible, in principle, for research to answer such questions, as Wittgenstein, who was a research scientist prior to turning to philosophy, demonstrated nearly 100 years ago.
I wrote all this, even though it seems to be marginal to your supplementary question, because it is a terrible mistake to give up on science just because the temptation lies close to hand to ask the wrong questions of it. Science is like a sharp-eyed watchdog; many dubious ideas that were traded in philosophy for centuries have been exposed as figments by the clear thinking that science demands. But some belief systems seem to be ineradicable; today more than ever ordinary people are addicted to astrology, parascience and whatnot. So it's important to think clearly about such recondite issues as your question entails.
Somewhere else in this Q & A segment I answered another question on this topic, which I recommend you seek out (just search for my name until it is indexed). I deal there with the possibilities of conscious life elsewhere in the universe, and my conclusion is that one cannot plausibly exclude it, because all matter in the universe is structured. It is (if you can accept this) an a priori condition of existence. No structure, no existence. (For us to be able to assert that atoms actually exist is only possible because they vibrate, and in doing so they shed part of their structure, an activity which translates as detectable energy.)
Now you wonder why I say the universe 'is conscious of itself' and why this is so important. Let me give you a definition: self-consciousness entails the ability to account for yourself to yourself. On a lower level, e.g. among fish or snakes, consciousness entails the ability to discriminate between a self and a non-self. You'll need to distinguish clearly between these two. They are the fundamental modi of consciousness which organisms gain from the possession of nerves, for the simple reason that the organisation of nervous systems includes a capacity for evaluation: in simpler organisms to detect and evaluate sensations and perceptions; in complex organisms like humans the capacity for evaluating self-generated percepts (like words, symbols etc.). It follows that creatures without nerves cannot have consciousness, although they must still be able to discriminate between what is inside and what is outside of their body structure (technically this is referred to as metabolism and homeostasis); and that is indeed the basic condition of being alive.
Now I'm just coming to the point. For something to exist and for something to be are different criteria altogether. A bar of iron may be said (by a conscious creature!) to exist; but to be involves the consciousness I referred to above. In other words, to be entails the knowledge that I am.
Accordingly the matter structure of the universe, although it may in some abstract sense be acknowledged to exist, cannot be said to be. There is no agency with the power to render this existence conscious of itself. If I may formulate it in a paradox: the universe does not possess nerves, hence it cannot be conscious! So the need is for nerves to evolve. Now this, as you know, has occurred. It is pointless to deny it. Several hundred thousand species on earth managed that feat; and (as above) I consider it altogether plausible to assume that elsewhere in the universe, similar evolutionary paths are available on planets with suitable environmental conditions.
You'll appreciate from this that to call the universe 'dead' is merely a metaphor. Something cannot be dead unless it has been alive. And what is implied in what I said is that the universe appears, in virtue of its bias for structure, to also contain (via carbon atoms) a bias for the sort of structure that will eventually, in selected environments, evolve into conscious organic entities. On earth, we know that one species of such organisms evolved the type of self-consciousness which, by extension, allows us to postulate that the universe has 'cognisance of itself'. It does so because we are its agents for this self-knowledge.
Now short of writing a book (I may well do so at some future stage!), I must leave this difficult concept to stand by itself. But I will leave you with some hints from two philosophers who thought along the same lines. Erigena, who lived over 1000 years ago, published a vision of God which can on one level be read as conveying the notion that God, in order to become conscious of himself, needed to disperse his spirit throughout his creation; that God is conscious of his own being through us. He was excommunicated for this outrageous idea; but in our context, you need do nothing more than replace the term 'God' with 'universe'. Erigena himself would probably agree that the two are the same, or two sides of the same concept.
More recently, the German philosopher Schopenhauer theorised that what we call 'Will' and 'Energy' are really one fundamental force of the universe, the principle of activity itself. Thus the universe constitutes itself by investing this force in matter (energy) and in organisms (will). I suspect many physicists, if only they knew of this idea, would find much to agree with. Again, of course, one may interpret this force as a constituting agency, a means for the universe to acquire both consciousness and being. (Schopenhauer, a committed atheist, would however deny any connection to God.)
This is a lifelong search, Michael; but I hope that my answer will give you a kind of starting block. With such 'deep questions' it is always difficult to know where to begin; I would love to think this will obviate a lot of unnecessary ransacking of a literature chock-a-block full of figments and fancies. Just don't confuse what I write with 'facts'. I simply reflect what, with the best conscience I can muster, is scientifically tenable and philosophically acceptable.
I'm a philosophy graduate student from Algeria and I have many questions and ideas that worry me, so I need your help in order to reach a good solution.
I'm very interested in marriage as a project and I hear many ideas about the criteria for choosing a good future wife. I heard a radio interview (Arabic service of the B.B.C.) with a professor specializing in "Family Sociology", who said that a difference in speciality between an intellectual couple is very important for spiritual progress and stability, because it avoids routine (and anxiety, if one of the partners feels intellectually handicapped!). I would personally prefer a wife from the medical profession (a physician), because I am interested in the integration between "Medicine and Philosophy" (the title of an important magazine from the UK).
So, would you mind directing me toward a good choice (because I believe in philosophical counselling), or would you orient me toward a specialist, or toward articles or websites, that would give me a sufficient remedy for my suffering in my present state of indecision?
Well this is certainly one of the strangest questions on a philosophy forum... you want marriage advice? Ok.
Read this: The Seven Principles for Making Marriage Work, by John Gottman. This guy can evaluate a couple and tell with 90% accuracy whether their marriage will work, and he has recently developed a mathematical model that seems to predict this. Amazing, right? But as far as I can tell it is backed up with consensual, double-blinded, empirical data.
So, what is it he says? The essence is this: the better friends you are with your partner, yes, friends, as in, "wow I like to hang out with [him/her]", the better your marriage will be. That's it for the big secret. Now the question becomes, just what exactly is it to be a friend? And that involves, in essence, respecting and liking the person for themselves. I mean, it all sounds awfully trivial, doesn't it? But... all the stuff about "love", "attraction", "love at first sight", etc., etc.... not the way to go. Be a friend who enjoys their company, and you've got it.
Now, the hard part. How does a man learn to be friends with a woman... or vice versa? Most cultures do not teach this, and indeed actively discourage the kind of relating that would lead to learning it and to such friendships. And look at the problems that result. I don't know the answer to this, for any given person. It took me about 30 years before I figured it out (and yes, it was before I read this book) and a great deal of effort in learning how to see a woman as someone I could be friends with before getting sexually involved. But it got me a good marriage, finally. Good luck!
Steven Ravett Brown
Marriage is an important decision in most cultures, so asking advice is no reason to send you to a 'specialist'.
Mind you, seeing marriage as a project is extremely rational. Seeing intelligence as the main criterion for choosing a marriage partner is quite rational too, and also very limited. Instead I propose to use the word 'creativity' and see intelligence only as part of it.
During evolution humans have become quite clever at choosing a good partner. The trait called attraction was developed over immense stretches of time. I agree that this trait was developed for finding a good mating partner, so as to have successful children. But the trait takes the fit between the parents into account as well. A bad marriage mostly leads to traumatized children. Sociology has only existed for some 300 years, and is a rational type of sport.
So if you feel attracted to some woman (more than ONLY physically, but that counts too), then learn to trust that feeling. That means: forget about human made concepts like wealth, appearance, intelligence, etc.
How do you think wild animals (no offense) find the perfect partner? (Science has found that they really do.) Just by trusting intuitively in their sense of attraction. In this respect animals are more effective than humans. Maybe not every rational criterion is bad, but don't forget about the natural ones.
Michael also asked:
I am not clear about what Jurgen Lawrenz calls the 'instantiation of self'. How, in Lawrenz's theory, can one escape the problem of illusion?
It just occurred to me that I can give you a first-rate example from 'real life' as a helpmate for understanding what I mean by Mind (Soul, Spirit) being 'instantiated' in a Self.
I leave undefined, as a matter which is neither scientifically nor philosophically ascertainable, whether this soul or mind or spirit therefore pre-exists or not. I'm cautiously inclined to answer this question in the negative, although it makes it more difficult. It would be easy enough, today, to accept Erigena's principle and suppose that the universe is infused with the spirit of God which seeks instantiation in humans. But my scientific research has persuaded me that the solution to this enigma is not as readily to hand. In the past, philosophers without science took the easy route of extrapolating from the human upon the divine dimension (and this includes eastern thinkers, Indian as well as Chinese and Arabic); but this is no longer feasible now that Kant has incontrovertibly shown the incompatibility not only of these dimensions with each other, but that our concept of infinity is deficient, for the same reason. One of the most important tasks facing philosophy is to work out an adequate concept of infinity. No philosopher known to me has even begun to tackle this.
In my philosophy the human creature is an animal (a mammal); take away the mind and you have a simian showing minor somatic variations from chimpanzees. Accordingly the difference cannot be physical. Rather, it is a question of there being A BIAS in operation in the universe, which is easily seen in the fact that all matter in some way or another forms structures and that all the chemical elements have 'predilections' for assembling themselves. In a word, I repudiate the notion of 'chance'. Assembly may be undirected, but the bias sees to it that the 'chance' occurs. All of chemistry is devoted to the study of these biases, and human chemical engineers have discovered a number of artificial combinations that still work, but do not occur spontaneously: so here the human mind is introducing another BIAS. This is one reason why I say: the concept of unilateral illusion is itself an illusion. If we can 'interfere' with the spontaneous reality there is, then we must ipso facto have access to that reality.
Now the important criterion is this: that among that suite of naturally occurring chemical elements there is one, the carbon atom, whose BIAS is such as to give rise, under certain conditions (temperature, chemically suitable environment) yet altogether spontaneously and without coercion, to macro-molecules with the potential to transform into organisms. These entities (initially bacteria) possess, in turn, a BIAS to 'upgrade', to 'complexify'; and thus in the natural course of evolutionary passage, small communities which we call 'cells' arose as viable and independent living things.
At this point I'm going to jump over a few billion years of evolution. When the human being (or hominid) turns up some 25 million years ago, we find that this 'upgrading' has arrived at a truly mind-boggling complexity, not just relative to its body functions, but especially in relation to its nervous system. The important point here is this: that the human brain is made up entirely of a special variety of cells we call 'neurons'. Now it may be news to you that all these neurons are also organisms and accordingly individually alive. I have been amazed to discover that in this age of science and the universal distribution of knowledge, most people (not a few scientists among them) are so scientifically illiterate as to be unaware of this and instead believe that the brain is an 'instrument' or even a computer running software! In fact (let me stress this: IN FACT) the brain's many billions of neurons are a society all of their own, who make their living by building and working at the structures by which we experience sensations and perceptions, who live and feed, get sick and tired and eventually die just as we do.
It is these neurons who 'created' or 'invented', as a separate process, the mind. The conditions under which this occurred are unique to humans, but they are in-principle a potential or possibility of neuronal assemblies. So here is the point: that the Mind or Soul instantiated in a Self is a creative resource of the universe, coming into effect in biological matter of a suitable complexity of organisation.
In a strict sense, this potential or bias is already laid down in the very constitution of the carbon atom. If you like you may therefore (as a speculation) propose that 'God' (or by whatever name you wish to title ultimate BEING) 'seeded' the universe with carbon atoms in the 'foreknowledge' that in the natural, spontaneous course of its evolution, this universe would then give rise to creatures which could be endowed with the kind of self-consciousness that in turn enables the universe to attain to consciousness of its own being. And thus, to continue speculating one further step, this would imply that, just as we are self-conscious as a result of the combined non-conscious, yet sentient work of microscopic organisms, so we humans abet, through our inhabitance of the imaginative dimension, the self-knowledge of the universe.
I need to emphasise here, that the last paragraph is evidently speculative; but the preceding are the facts that may conduce to this type of speculation. I could easily attach other scenarios and different speculations, as long as the facts, and especially biochemical and biological facts, are kept within sight.
From this you may deduce that I entertain rather stringent standards on what I consider to be admissible (metaphysical) speculations. It may serve as a guiding light to my repudiation of 'illusion' as a modus vivendi. To have validity, this concept needs a definition, and you will find on close examination that illusion on those terms cannot be defined without circularity. I suspect you may be inclined somewhat to eastern mysticism; I on the other hand see in it a necessary and indispensable stage in the growth of the human mind: a corrective to its (collectively speaking) overweening ambition which, as we know only too well, is apt to relapse from time to time into its infantile state. From one point of view, bearing in mind the addiction of millions of (exceedingly well-educated!) western people to flippant and frivolous beliefs, the eastern philosophers can be said to have grappled more seriously with the really fundamental issues; but that was a long time ago, and since then they have got stuck in this rut. Its value for us today, if I may put it this way, lies in having brought the fragility of the mind to its own surface of consciousness; but of necessity we must go on and find our way, the 'golden road' which lies somewhere between the extreme materialism and the extreme transcendentalism that are still so characteristic of East and West.
I'm having trouble answering a big question. Maybe you can be of help. Here it is:
"Different cultures have different truths."
"A truth is that which can be accepted universally."
What are the implications for knowledge of agreeing with these opposing statements?
These two statements are examples of two opposing visions in philosophy, the relative one and the absolute one. Accepting one of these visions means that you choose a camp, either that of Nietzsche, Wittgenstein and Kuhn (the relativists) or that of Popper and his followers.
For what it's worth: my very personal opinion is that Karl Popper (though I admire him) chose what has been, since the Enlightenment, the dominant camp, but was nevertheless mistaken.
The relative camp SEEMS to have the future.
It is not clear from your question whether you are interested in the implications for knowledge of agreeing with each of these statements individually or collectively. I'm going to try to answer in a way that addresses both possibilities.
It is widely (but not universally) accepted in Philosophy that "knowledge" constitutes a justified belief in a true proposition where for our purpose here we can define a "proposition" as an assertion that says something that can be either true or false. So the implications of these two statements you have provided arise from their respective notions of what constitutes a "true proposition".
The two statements that trouble you present quite distinct and conflicting notions of "truth". But that is because they come from quite different conceptual realms. So it is not surprising that they appear to conflict when juxtaposed out of their natural habitats.
The first statement "Different cultures have different truths." This is a classic statement from cultural anthropology. Within that context, the meaning of the statement derives from two observations: (a) what makes an identifiable "culture" are the common beliefs shared by the people of that culture; and (b) what separates one culture from another, are the differences between the common beliefs of the two cultures. For there to be two cultures, there must (almost by definition) be two different sets of common beliefs shared by two different groups of people.
For our purpose here, let's define "a belief" (like a proposition defined above) as an assertion that says something that can be either true or false. What marks a cultural belief, then, is the acceptance of some assertion as true by all (or at least the great majority of) the people of that culture. This general acceptance can be (and often is) quite independent of whether the assertion corresponds to the facts of the matter, or is consistent (coherent) with the other beliefs of the people of that culture. It can even be independent of whether in fact anyone at all actually believes the assertion to be true. All that really counts is whether the great majority behaves as if they believe the assertion to be true.
Within the context of cultural anthropology, the statement in question is not an attempt to establish a definition of "truth". Nor is it an attempt to claim that the notion of "truth" is culturally relative. It is instead a bit of poetic license used to express the fact that different cultures believe in different collections of fundamental assertions about their culture and their world. It is a description of what people believe to be true, rather than a statement about what is actually true or what is actually knowledge.
To take this statement out of its cultural anthropology context is to dip into a school of philosophical thought usually referred to as "Cultural Relativism" (for obvious reasons). Within this wider context, the statement would have to be interpreted as both a definition of "truth", and a claim that the notion of "truth" (and thus "knowledge") is culturally relative. Within Cultural Relativism, a belief is considered to be "true" if it is widely believed to be true within the relevant culture. Since beliefs differ between cultures, as documented by cultural anthropology, "truths" must necessarily differ between cultures.
(Cultural Relativism is more widely maintained as a system of Ethics than as a treatment of truth and knowledge. In Ethics, Cultural Relativism maintains that what is "good" and "right" is defined by the common beliefs of the culture as to what ought to be considered "good" and "right".)
The second statement "A truth is that which can be accepted universally." Taken at face value, the statement is a straight definition of "truth". It establishes the criteria that determine whether or not some assertion is to be considered true. Whatever the assertion is, if it can be accepted universally, then it is to be considered true. Unlike the cultural anthropology context of the first statement, this definition of "truth" does not require actual acceptance by anyone. It requires only that such acceptance is possible, and makes no reference to how unlikely that possibility might be. Unlike the Correspondence Theory of "truth", it does not reference the actual facts of the matter. And unlike the Coherence Theory of "truth", it does not concern itself with the consistency of beliefs.
Consider an assertion such as "Unicorns exist" or "Fairies dance under the moonlight at the bottom of my garden". Certainly it is thinkable that these two assertions could be accepted universally independently of whether unicorns or fairies exist or not; independently of whether a belief in the existence of unicorns or fairies is consistent with other beliefs held to be true; and independently of whether there actually is universal acceptance of these assertions or not. Therefore, each of these assertions would have to be regarded as "a truth". Clearly this is not a reasonable approach to a general meaning of "truth". And clearly, this notion of "truth" is inconsistent with notions expressed in either the cultural anthropology or Cultural Relativism contexts of the first statement. So we must assume that there is a hidden context behind this statement that has been lost in transmission.
If truth is determined by the cultural acceptance of the assertion as true, then you "know" any assertion that you believe to be true, and that you have cause to believe is generally accepted as true within your culture. Alternatively, if truth is determined by the possibility of universal acceptance of the assertion as true, then you "know" any assertion that you believe to be true, and that you have cause to believe could possibly be universally accepted as true.
Note that in both these cases, there is no reference to the actual facts of the matter, and no reference to the consistency between one assertion of knowledge and another. Thus, it would be perfectly feasible for you to "know" both that "Unicorns exist" and that "Unicorns do not exist". This is not how people normally think of knowledge when they consider whether they "know" something.
In the absence of any context for the second statement, there are a number of ways to reinterpret it so that it makes a little more sense. We could, for example, draw upon the cultural context of the first statement and reinterpret the meaning of "universally" in the second to mean "universally within a culture". This reinterpretation would at least make the two statements consistent.
Another reinterpretation would be to understand "that which can be accepted universally" to mean "that for which there is justification that all rational people would accept if they were aware of it". This would incorporate the notion of justification critical to the concept of "knowledge" we are employing here. It would also eliminate the unlikely but remotely feasible possibilities opened up by the use of "can". On the other hand, without some contextual reason for this reinterpretation, it is certainly stretching the use of English to find this meaning in the words provided.
I'll leave you with the question of whether or not either the Cultural Relativist or the universal acceptance notion of "knowledge" and "truth" is consistent with how you employ those notions. I know for me, neither is reasonable. Personally, I subscribe to the Correspondence Theory of Truth (wherein an assertion is true just in case it accurately describes the facts of the matter). I find, therefore, that both of these statements are philosophically incorrect, although they may certainly possess poetic meaning within some special contexts (such as cultural anthropology).
Michael also asked:
Suppose we are sitting together talking and I produce a living rabbit. Then I cut the rabbit in half, right down the middle. Now we look at both halves of the rabbit and I ask you, "Now where is the rabbit?" Further suppose you decide to answer me, "There is no rabbit, only 2 half rabbits." Next I produce another rabbit and this time I cut off exactly one fourth of the rabbit. I produce a third rabbit, cutting off exactly one eighth, and ask the same question, and I get the same answer. Finally, with one rabbit I just trim the end of one toenail and I point at the rabbit and the piece of toenail that I have removed and again ask, "Where is the rabbit?"
This experiment makes it clear to me that what I am commonly calling a rabbit is a completely arbitrary definition of something that, in fact, never exists. The actual thing I am referring to when I say "rabbit" is just a mental image that does not actually correspond to anything of a real nature.
Suppose a coyote is eating a living rabbit. As we watch the rabbit eventually stops struggling and the coyote devours the rabbit, piece by piece until everything has been consumed. When did the rabbit go? At what point in this process did the rabbit cease to be? When the rabbit pieces are in the bowels of the coyote they are digested into smaller and smaller pieces until finally they are decomposed into their chemical constituents, absorbed and incorporated into the tissues of the coyote. It must be clear that any choice we make about when the rabbit is and when it ceases to be is completely arbitrary. Furthermore, what was once rabbit has become coyote. When it is one thing and when it becomes the other is again completely arbitrary. Any choice that we make has no relationship to the actual identity of the thing from the point of view of the Universe.
From the point of view of the Natural Order what we are calling a coyote and a rabbit are just porous bags of molecules, sacks of energy wrapped by the sheerest gossamer netting. And these bags or sacks may come close to each other and then move farther apart, at times commingling so intimately that they seem to be one. But it is always a matter of distance, sometimes very short, sometimes farther apart. It is always a continuum with no intrinsic borders, limits or boundaries. This demonstrates clearly that there are no individual entities, only relative concentrations of energy coming and going with extreme dynamism.
It becomes clear that from the point of view of the Universe there are no entities only actions, without entities that do the acting. Second, any actions of ours that arise from an idea of self, where self is different from some other, are actions based upon an illusion.
The examples address the issue of Entities, whether organismic or not. This separates in my mind the issue of life versus death from the issue of whether the Self is an illusion because there are no Entities.
What you're trying to tell me is that the universe is a seamless continuum of matter in which what seem to us to be entities are merely local concentrations. This is an idea due to Heraclitus and Parmenides, who lived about 2500 years ago. Another version of the same idea was taught by Schopenhauer. Certain theories in modern particle physics permit that interpretation. So you see the notion is both very old and tenacious.
However, to come to grips with the concept of entities and whether or not the notion of a 'self' relies on it and therefore is an illusion, I will state the theory in your language:
The self is a local concentration ("focus") of a species "C" of the matter/energy continuum ("m/ec") generated by a prior focus of m/ec of species "B" which in turn is the outcome of a preceding focus of m/ec of species "M". To explain:
Species "M" or "matter" is characterised by spontaneous, repetitive, mechanical, predictable and entropic congregation. This species (excluding any possible vacuum) accounts for more than 99.9% of the volume of the universe. If an intelligence was provided with the initial atomic configuration of this continuum, he would in principle be able to calculate the entire history of the universe, atom by atom, through to its end, irrespective of any trends towards local concentrations.
Species "B" ("bio-organisms') is characterised by spontaneous, erratic, non-mechanical and anentropic congregation. The same intelligence, if provided with the configuration of any congregation whatever would be unable to predict at any instant in time what will occur at any later time, except in some sub-species regarded as mass phenomena. The source of the erratic 'behaviour' is non-normative chemical assembly resulting in an integrated work cycle ("iwc"). Another name for iwc is 'metabolism', yet another 'homeostasis'. This species of the m/ec accounts for less than 0.1% of the volume of the universe.
Species "C" is characterised by non-perceivability, for although certain concentrations of the electro-magnetic spectrum are measurable, they attend but do not comprise Species "C". Moreover "C" displays a property of unknown and unperceivable composition which may be surmised to be responsible for firstly the erratic non-computable trends, secondly the anentropic (non-dissipatory) coherence and thirdly the self-referential capability of "B". The coherence as icw's confers on these focuses the status of "entity"; the self-referential attribute is commonly referred to as "[self-]consciousness".
It is, however, an outcome of the existence of "C", notwithstanding that it is undetectable by objective methods of assay, to confer on some members of "B" the aforementioned attribute of self-referential consciousness. Accordingly the universe ipso facto contains focuses of consciousness conferring on the universe itself the selfsame capacity within those local concentrations.
It is unknown whether or not these concentrations are dispersed across many localities of the universe; it is also unknown whether they are subject to evolutionary development. Known traces of these focuses account for an unknown percentage of the volume of the universe. It is a legitimate conjecture that these traces occupy 0% of the volume of the universe.
Two deductions ensue from this analysis. Firstly: that although the last-named properties of some species in the universe account for 0% of m/ec, they comprise the only portion of local concentrations where the universe may be said to hold a form of awareness of its own being. Secondly, since these attributes are neither detectable nor manifest in any way whatever among any concentrations of "M", the universe would, if it lacked species "C" altogether, not be referentially cognisant of itself, and accordingly no avenue would be available to declare that it exists. Accordingly of a universe solely comprised of "C" it may indiscriminately be said "it exists" or "it does not exist". The statements would have identical meaning.
Reverting to common language: you can now deduce from the above (1) that "self" is not an entity but a property of some entities; and (2) that entities exist.
It may help to note that entities are distinguishable from objects; indeed the term 'object' is superfluous in this theory. Observe that your 'experiments' did not adequately differentiate these two types of congregation of m/ec.
As a purely speculative aside, let me say that there is no good reason for believing that science has cognisance of anything more than an infinitesimally small finite segment of the universe. The above theory, to which I am not unsympathetic, may then invite consideration of the possibility that the entropical drift (i.e. what stands behind big bang and big crunch theories) is just an effect suggested by observable phenomena, whereas non-observable trends, e.g. the evolution of consciousness, may be taking place at the same time, though unperceived. The universe may be in process not of burning itself to a cinder, but of converting itself to a "thought", and even this is an old idea, mooted in 1930 by James Jeans.
Finally, it is plain from the above that the notion of 'illusion' would entail a circular definition; and from this it follows that any tenable concept of 'illusion' can only be superadded as a special instance or incidental feature of the operation of 'C'.
Is eating people wrong? Why?
This answer comes a bit late in the piece (it belongs to Answers 20), but I've had the benefit now of reading what previous respondents have had to say, and my tuppence worth of wisdom may still not be amiss in the context.
The question, in the end, has two dimensions to it:
1. People are organisms and thus distinguished from the 'dead matter universe' in certain ways.
2. People are self-conscious agents (in the old but by no means redundant terminology: people possess a soul) and thus are distinguished from all other organisms in certain ways.
Re Point 1. The fact that such questions can arise in the first instance is plainly based on the fact that every organism on this planet is food for some other organism. This is one feature by which the organic realm is distinguished from the inorganic.
Viewed strictly from the 'food chain' point of view, humans are food for lions and sharks (not to mention fleas, lice, bacteria and whatnot): well, why not for other humans? No logical argument is capable of resolving this issue in a morally responsible way. I put it this way to make the elementary point that notwithstanding Kant, the concept of a person (under this present Point 1) is not relevant in the context: humans are mammals. To an outer space visitor, looking for sources of organic food of the flesh variety, we would be as welcome as cows and sheep. Moreover there are human industries today which operate on precisely the same assumption: that the concept of a person is a useless addition to the concept of mammal. Research into artificial intelligence, cloning and a few others could not exist unless there was a covert belief (even if it remains unacknowledged) that ultimately the specific human-mammalian characteristics such as intelligence are portable, replaceable, reproducible and mechanisable. Whether or not this belief rests on a fallacy is not, in present context, an issue. The fact that such industries exist, consume billion-dollar funding grants and operate under the aforenamed intellectual conditions, speaks for itself.
This leaves us with Point 2 needing to come to the rescue. Paradoxically, a good way to start would be with the observation that none of the other organisms on earth have devised artificial means designed to replace themselves!
This is not as frivolous an observation as it may sound, but I'll stick to the essentials of the dimension which apply here. The concept of a person relies on a feature unique to humans in the organic realm, often called a 'soul', but 'mind' or 'spirit' will do equally well. Crucial to the concept of person is a recognition that the features identified by the term 'soul' point beyond an immediately comprehensible factual domain, even beyond the capacity of humans to truly understand what they mean by such a concept. This implies the possibility that the human animal is a participant, possibly the first participant, in an evolutionary potential of the universe that is not governed by criteria of objectivity such as we apply standardly to its study. In ages gone by humans have recognised this potential in various ways by accepting the existence of a creative God, who is at the same time the 'owner' of our soul and likely at some temporal juncture to 'reclaim' it and pipe it into his infinite habitat. That's not such a stupid idea; and I have often wondered why and how we modern, well-educated and scientifically alert denizens of the world want to 'reduce' this dynamic concept back to its poverty-stricken materialistic sticks-and-stones model. However, be that as it may, the concept of a person which derives from this ancient belief system does have the relevance that it is accompanied by a notion of an individually responsible soul inhabiting an indifferently (chance) selected human body; but, and this is the crucial argument, since that body now functions as the vessel for the nurturing and development of that soul, it is a criminal act against God to kill that body. And eating entails, necessarily, killing.
It should not be difficult, even for an atheist, to accept the embarrassed locution 'emergent property' (what property?) as a scientific substitute for the concept of 'soul' or 'mind'. In any case, the concept of 'emergent property' in itself implies uniqueness; it accepts by default that the result is an individual. What is lacking, however, is a notion (as in 'soul') of the sanctity of that property, why indeed it should be regarded as anything special at all.
So in our own dishevelled way, we cling to the ill-defined and untenable notion of a 'person', we seek explanations in social, environmental and evolutionary conditions for a moral definition, but we make no effort (scientifically) to retain the indispensable feature I have called 'sanctity'. I'm not advocating a religious point of view here; in my book 'sanctity' is a human concept. But in saying human I am already making a distinction-of-uniqueness, I am already acknowledging that something is different, if only I knew what it is!
What kind of conclusion can we reach from this? Firstly, that 'mind' or 'soul' are characteristics of unknown constitution and unknown purpose. Secondly, that one effect of possession of these characteristics is that their owners put questions abroad like, 'is it immoral to eat humans?' Thirdly, that in thinking about these problems, we limit and circumscribe our research effort by the application of inappropriate criteria (demanding that e.g. a soul be a thing with determinable thingness). And finally, that we inveterately persist in repudiating the genuine value of non-scientific ideas devoted to such research, while all the time most of us still 'feel' that the road to a real answer must lie in some such direction, not in an exclusive reliance on reductive methodology.
Somewhere recently I came across a book on a 'bioaesthetic' principle, where the term 'making special' was put forward as an essential human characteristic in the transformation of banal objects and activities, frequently in the context of religious ceremonial. It is nothing other, as you'll recognise, than the 'sanctity' I mentioned above. It is an inalienable prerogative of humans to enact such transformations, and we will continue to fail in our endeavours to understand what a human being is for as long as we ignore the reality and indeed uniqueness of this faculty. For without it, when confronted with such an easy question as our questioner put to us, we have no leg to stand on.
I believe most of the questions we ask ourselves are "irrelevant". When I say "Does anybody know why we are living?" what I am doing is just mixing up the wrong ingredients to make a cake, and believing that the cake I made is really a cake!
Let me give an example to elaborate on this. Think of a machine that can make up questions by selecting words from a categorised list and putting them together. One question it might come up with by putting random words beside each other is: "why", "wood", "sing". Makes no sense, huh?
We use a pretty good intelligence in eliminating such questions; however, when we come to questions like "Does anybody know what we are living for?" we do "believe" that this is a relevant question and that there should be an answer to it. Hence we search for one.
Let me turn back to the "Why wood sing?" question. After coming up with a question like the one above, our machine arrives at certain possible answers to it using some rules, until an algorithm feeds back that a satisfactory answer has been found during its trials.
I call this the "fit". It is some statement that we come up with after a mind exercise, and that pushes us to a certain anxiety level which we associate with the occurrence we label as "finding a solution".
"Because wood burns" might be an answer that our machine finds. Though it does not make sense to us, what matters is whether the answer obeys the rules of the "fit" conditions. If we define "fit" as: if you can make a statement that would relate the attributes of wood and singing in "any way" then your statement is accepted, you can believe that it is a right answer, our machine thinks that it came up with a right answer.
Are there any articles or books revolving around these ideas that you might point me to?
Because you asked for references, I am going to start with this quote from Wittgenstein. From what you have said, I think you might see its relevance:
"The Earth has existed for millions of years" makes clearer sense than "The Earth has existed in the last five minutes". For I should ask anyone who asserted the latter: "What observations does this proposition refer to; and what observations would count against it?" whereas I know what ideas and observations the former proposition goes with.
"A new-born child has no teeth." "A goose has no teeth." "A rose has no teeth." This last at any rate one would like to say is obviously true! It is even surer than that a goose has none. And yet it is none so clear. For where should a rose's teeth have been? The goose has none in its jaw. And neither, of course, has it any in its wings; but no one means that when he says it has no teeth. Why, suppose one were to say: the cow chews its food and then dungs the rose with it, so the rose has teeth in the mouth of a beast. This would not be absurd, because one has no notion in advance where to look for teeth in a rose.
Ludwig Wittgenstein, Philosophical Investigations, pp. 221-222.
Here's a recipe for a cake: Take two squirts of liquid detergent, a cup of flour, a dollop of tomato ketchup and a large packet of salt. Put the mixture in a baking tin and leave out in the sun for four hours.
No? Why isn't that a cake? It is a 'cake' that a child might bake. A make-believe cake. Human beings might not find it edible, but then again, you never know: ET might find the 'cake' delicious. Then again, in what sense could something be 'cake' for an alien being? We have to contrive a sense.
Not every sequence of words that sounds like a question is a question. Sometimes it's just obvious that the 'question' is not a real question, and sometimes it isn't so obvious. But what is a 'real question'? If someone utters a sequence of words that sounds like a question, and then someone else comes up with an answer that satisfies us, reduces our anxiety level or whatever, doesn't that prove that the question was a real question?
Now we're right on the edge of the precipice. Because if one accepts that, then it seems that philosophy is reduced to a trivial game.
Let's get back to the rose. In what sense is 'In the mouth of a cow' an answer to 'Where are a rose's teeth?' There's a 'fit' there, you can see the point. But it wouldn't even make a good riddle. The same is true of 'Why does wood sing?', 'Because of the whistling sound it makes when it burns.' What is characteristic of questions posed in genuine riddles is that while many possible answers might 'fit' one way or another, some particular answer impresses us as being the right answer. Riddles have a solution.
Here's a riddle from a Christmas cracker. 'Which flowers like to kiss?' 'Tulips.' Suppose someone asked you this and you thought, 'Orchids.' Why? 'Because of the "kissing" movement the orchid makes when the bee enters it.' The answer fits, but it won't do. On the other hand, there might be another 'right' answer to the question which flowers like to kiss (see if you can find one) so it isn't necessarily a matter of uniqueness. The distinction between the answer that merely 'fits' the riddle and the answer that 'solves' it might be difficult to define in the abstract, even though we intuitively grasp the difference. First, one has to ask what makes a riddle, which I suspect is almost as hard as defining a 'joke'.
In philosophy, 'solutions' don't come easily, but one recognizes the difference between answers that you can make a rational case for, and answers that merely 'fit' in some looser way. Similarly, it is a matter of philosophic judgement, not any set of fixed rules or precepts, let alone a universal theory of philosophical questions, that decides whether a question like your example of 'Why are we living?' is worth taking the trouble to answer.
You're in good company: Nietzsche and Wittgenstein already considered most questions to be nonsense, triggered by a wrong use of language, even most of the questions that were bothering the philosophers of their time.
You define the mechanism 'fit'. It supposes inherently that truth is recognized by the emotion associated with a 'correct' answer. But what does such an emotion tell me about your opinion of truth? Does this emotion depend on the system of thought used (relative), or not (absolute)?
Machines only execute commands, so there must be rules leading to your emotion. Try to make these explicit.
To recognize Truth, it takes an opinion about it. That is an essential question that still divides the philosophical community.
The only thing that I can think of that is relevant to what you're asking is the late Wittgenstein (and of course his students). Try:
Wittgenstein, L. The Blue and Brown Books. New York, NY: Harper & Row, 1965.
Wittgenstein, L. Philosophical Investigations. Edited by G. E. M. Anscombe. 3rd ed. New York, NY: Macmillan Publishing Co., 1968.
Wittgenstein, L. Remarks on the Philosophy of Psychology. Vol. I. Translated by G. E. M. Anscombe. Edited by G. E. M. Anscombe and G. H. von Wright. Chicago, IL: The University of Chicago Press, 1988.
Wittgenstein, L. Remarks on the Philosophy of Psychology. Vol. II. Translated by G. E. M. Anscombe. Edited by G. E. M. Anscombe and G. H. von Wright. Chicago, IL: The University of Chicago Press, 1988.
Steven Ravett Brown
I am not a scientist, I am 27 years old and I have an idea in relation to time. It is not a subject which has consumed much of my "time", but for some reason I feel the need to confirm or deny my beginning of an idea and to know if it is (and I strongly presume it is not) a good basis for a theory.
Assuming time is unique to each individual, could not each person have their own timeline, i.e. from birth through life your actual timeline remains a constant? You move along your timeline at your own speed. The speed you progress along it is set; it cannot be altered, and you can move neither slower nor faster. This reinforces the theory that "now" does not exist, because each individual has their own timeline; also because of this, the universe would continue to evolve without sentient beings, and if one person's timeline is omitted, other timelines would not be affected. This also runs with nature's natural cycles (for a crude example, the earth will evolve and the end of its natural cycle is collision with the sun; this is the earth's timeline).
Every living object has its own timeline; the only objects which do not have a timeline (or natural cycle) are those created by man.
Each person moves through space along their timeline at their own speed (time). It is therefore not possible to move back or forward along your own timeline, but it is possible to move to other people's timelines at any point. Events and future scenarios are always there; they are the background, the "space". Your timeline can alter direction to move through events which it would not normally cross, but overall events/scenarios "float"/pass across yours and everybody else's timeline and are therefore not totally predictable, though the number of events crossing your timeline at any point can be numerous.
Despite actions/ future actions it must be remembered that no two individual timelines ever cross. If lines are crossed then maybe this is time travel.
Time is also therefore not universal, and to use the experiment with the atomic clocks as an example, this cannot be related to time travel, because each person's timeline in the experiment has remained the same; you have not changed their timeline in any way, or the speed they travel along it.
Is this a new idea (does it make sense?), or can you point me to other similar theories? For some reason it seems to me to work, and it explains a lot of paradoxes about time or backs up other theories, e.g. the absolute conception of space.
I am a layman and would be interested in knowing more about the paradoxes and theories to see if this idea can be extrapolated to provide answers.
I'm answering this because you seem in earnest.
Um... you're wrong.
No this is not a new idea. No it does not make sense.
That's the summary. I'll highlight a few points. For one thing, think about this: time is the basis of movement, right? That is, an object or whatever traverses a distance in a time interval, and so it moves... that's what movement is, right? Now, a) how then do you "move through" time; b) how does "time move"??? That doesn't work, if you think about it.
A "timeline". Ok, now just what is that, besides a spatial metaphor for a particular comprehension of time? Yes, yes... maybe you've read about "timelines" in relativity, in sci-fi, or whatever. A timeline in relativity comes from a very particular analysis of time and motion, which does not fit with what you're saying. Anything else is just metaphor.
Ok... "every living object has their timeline"... you're confusing at least two concepts of time here. One has to do with events taking place "in time"; another has to do with the passage of time (which by the way doesn't really mean anything... another metaphor). "Events" means something... but I don't think you know what that is, because no one else does either. Or to put it another way, there are lots of theories out there about what an "event" is. The "passage of time" is yet another, yes, spatial metaphor for time that we use to comprehend certain aspects of it.
There's just too much confusion for me to go further... you keep taking metaphors as if they are reality. "Timelines", "scenarios", "backward", "forward"...
Look... before you do anything else on this subject, read these books:
Lakoff, G. Women, Fire, and Dangerous Things. 2nd ed. Chicago, IL: The University of Chicago Press, 1990.
Lakoff, G., and M. Johnson. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. 1st ed. New York, NY: Basic Books, 1999.
Steven Ravett Brown
I want to write a book based on philosophy. I don't want to be preaching my ideas, I want to explain a philosophical dilemma or question that is a subject of debate, and then put my opinion forward, but show how it relates to other opinions and what parts of those opinions I agree with and why.
Do you have any advice on how I should consider setting it out? Do you think it would be best to just tackle a few questions that are related to each other? Do you think I should set it out as a story, or somewhere in between a novel and a textbook (I don't want it to be like a textbook or a revision guide!)? Also, are there any books I should read for research?
Another of those oldie questions down at the bottom of the list... I guess Eve has given up on this... but... for some reason this one intrigued me...
Ok. On the face of it, without another 10-20 years of education, what you want is simply ridiculous. But then I started thinking about Colin Wilson and people like him. So... take a look at some of the early novels of Colin Wilson: Adrift in Soho, The World of Violence, Ritual in the Dark... neato stuff, which he wrote in his 20s. Try something like that, if you're able.
Steven Ravett Brown
Advice is cheap, as the old saying goes; and in relation to your question, the difficulty is that you may not be altogether aware of how big a problem area you're tackling there. However, there are fundamentally two ways of attacking this issue:
Firstly, you can study works by writers who have done this sort of thing before; and obviously you would choose those whose writings are truly philosophical, not just argumentative. For example, Sartre's Age of Reason and Camus' The Outsider are in a loose definition "philosophy set in motion" in a fictional environment. You might use those as role models: read them, and then go to the secondary literature to get yourself directed to the portions of their more formal philosophical writing where the same issues are dealt with.
Other novelistic examples are found in many writers who are not specifically philosophers; I might mention Aldous Huxley and Thomas Mann as conspicuous examples; but here as there you need to be well-informed about their philosophical backgrounds to derive profit. And you really cannot go past the classical examples of Voltaire's Candide and Bacon's New Atlantis.
Then there are writers, very few, whose novels are truly philosophical in an authentic sense, written by men who were philosophers but never actually wrote a philosophical text, just novels. Dostoyevsky's major novels belong in this class; and I would go so far as to say that no-one can claim to be philosophically comprehensively educated without having read at least The Devils and The Brothers Karamazov. Joseph Conrad's Nostromo and Stendhal's The Charterhouse of Parma might also be said to make the grade, and there are a few others, although by now you might find yourself with a major reading list to tackle. So perhaps you might alternatively consider Option 2, which assumes that you possess reasonable literary talent and especially a facility for writing convincing dialogue.
An obvious starting point would then be to select one or several connected philosophical topics, say "good and evil" or "the concept of justice" or some topic in fashion today that motivates you. Read what philosophers have written about them, pro and con, and then put up a few characters whom you'll have to portray as "embodiments" of opposing trends. The more complex these characters, the more convincing they are likely to be. I mean by this: ensure that A is not just evil, but has a streak of unexpected and eloquent compassion about him/ her (for example). The ideas you set in motion, being exemplified by persons, must not be "monolithic", but shade in and out contingent on circumstances, events, loves, hates, politics etc. The best example of this sort of thing is again Dostoyevsky, and you could do much worse (if you're serious) than to read his Devils five or six times and attempt to tabulate the characteristics of the main characters and how, why and under what circumstances they come out, what conflicts cause their characteristics to change, crack under pressure, or become modified in one or the other direction.
I must not fail to mention, finally, Plato. Several of his dialogues are the best models ever written. Especially pertinent in your context would be Protagoras, but Symposium and Republic are equally brilliant, though more complex and extended.
Well now: It remains for me, I suppose, to wish you the best of luck and happiness in your endeavour; and I do expect a mention among your "influences" when you pick up your first Nobel!
My question has to do with the number of humans in the world versus the number of non-human sentient animals. It requires a rather lengthy setup, so bear with me.
Imagine that you could somehow catalog every fertilized egg cell (zygote) that would one day grow to be a sentient animal. That is, pretend that you had a computer database describing every zygote that ever existed, from that which would become a prehistoric mosquito to one that would become Albert Einstein. All would eventually mature into a fully conscious being. The database would contain all kinds of information about the zygote, but you would primarily be interested in the species of the animal that it would one day become. The reason that the database contains information about only the zygote and not the animal itself is to underscore the fact that we all start out as such, no matter how big we become.
Now imagine that you query your database for the total number of homo sapiens egg cells that have ever existed. Nobody knows what that number would be, but let us say that it is roughly 15 billion. Now query your database for the total number of all of the zygotes. Obviously, that number would be much much higher. The number of insects alone would be extremely high. There are 200 million insects alive for every human on earth (I got that from Hollywood Squares). Multiply that by the number of years that insects have existed, and you obviously have a very large number.
Divide the number of humans by the total number of animals, and the result is an extremely small number. It would seem that this number represents the probability of being a member of homo sapiens, given that you are any sort of sentient animal having been born sometime in the past. Let us conservatively estimate that number to be one in 10 trillion.
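For illustration only, the arithmetic just described can be written out explicitly; the figures below are the questioner's own rough estimates (plus one back-derived total), not established data:

```python
# The questioner's rough figures, used only to spell out the arithmetic of the
# "lottery" argument; none of these numbers is established data.
human_zygotes = 15e9              # ~15 billion human zygotes assumed ever to have existed
total_sentient_zygotes = 1.5e23   # an assumed grand total, chosen so the ratio matches
                                  # the "one in 10 trillion" estimate in the text

p_human = human_zygotes / total_sentient_zygotes
print(f"naive 'probability' of being human: {p_human:.0e}")   # 1e-13, i.e. one in 10 trillion
```

Whether such a ratio is a probability of anything at all is, of course, a further question.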
Now let us take a closer look at one in 10 trillion. An event having that probability would be EXTREMELY unlikely to occur. In fact, I would even describe it as being "freakishly improbable".
The average lottery player would be much more likely to win the jackpot twice during her or his lifetime. I do not know about you, but I have never won the lottery once let alone twice. It would seem that, by having been born human, you and I have won the jackpot in the lottery of life. Twice! I cannot accept this as true. "Freakishly improbable" events just do not happen to you or me!
Now, the following thought comes to mind:
Do the lower animals such as insects really possess a rudimentary consciousness?
Most researchers in the field say yes. Insects and other such animals do indeed possess a rudimentary consciousness. They are sentient. They are not like zombies, sensing and reacting to things without having an awareness of them. They have consciousness.
Thus, human consciousness is an extremely rare thing on our planet.
Here is my question:
Are we humans really the unlikely winners in the lottery of life, or is there some other explanation? Could the probability of being human actually be much greater than I have assumed?
The problem with your arithmetic exercise is that numbers as such have no bearing at all on the situation you are portraying. In calling the likelihood of a zygote being a human rather than a mosquito 'freakish', you forget that the atoms in the universe that might or might not become part of a living thing have an even more freakish improbability to account for.
Forget numbers and probabilities and look at structures. Everything in the universe has structure, and thus the only pertinent type of argument along your lines of thought is to consider whether one or another type of entity is structurally probable (or possible), i.e. whether it makes sense for some type of structure to exist and how and where it does. In such a context a spiral nebula is easily explained by gravitational forces; and the entropy that eventually results is equally plausible. The context of your question is biochemical and biological, however, which entails a different slant or perspective.
Let me get one thing out of the way quickly. What kind of species possess consciousness or not we can scarcely be certain about, but a default theory is that nerves are the absolute minimum. Now consider that nerves are themselves living things! Are they conscious of their own consciousness, or of ours, or of none? You see (I hope) what a futile theory this is! Consider further, just for now, that a great deal of this theorising is concerned with our place in the sun, and that the pendulum tends to swing rather wildly between theorists who want to convince us that we're just overgrown apes and should, perhaps, dismantle civilisation and return to the trees, and those who believe that we have a destiny to manage this planet before somehow we get ourselves installed as managers of the solar system or even the whole galaxy. Well, I can tell you I'm not on the side of the former, even if I reserve the right to remain sceptical about the latter.
Now: In logic (or in the lottery scenario) there was no compulsive reason for bacteria ever to grow into anything bigger or more complex, seeing that they were (and are) perfectly adapted to survive just about any calamity short of this planet physically blowing up. Yet it is a plain fact that 'upgrading' is also a kind of 'default programme' for organisms: that's what the theory of evolution is all about. A deep problem in that theory is, however, that its exponents are also lured down the path of numerological speculation and thus keep stringently to the mechanical doctrine of genetic accident, which is no answer at all, but a simple causal argument that leaves you looking for more causes right through to infinite regress.
I can't write a book here, although that's what is really necessary to answer you in depth. All I can do is give you in one paragraph what my conclusions are, and then a couple of titles for you to pursue the matter on your own bat.
Carbon atoms are capable of forming polymeric chains of immense length and infinite flexibility. Carbon is the only atom so endowed. Now this suggests that carbonaceous structures will be different in kind from all other molecules, as indeed they are. A specific type of this kind of molecule, called a macromolecule (i.e. giant molecule) or polymer, given certain temperature conditions, such as prevail on earth, has the capacity of 'turning itself over' in such a way as to construct an integrated work cycle without external mechanical push-and-shove. This is difficult for us to come to grips with: we are still in thrall to the Cartesian division between mind and matter and therefore prone to seek explanations for all structures, including biological ones, by the route of reductionism. This doesn't work in the present scenario. However, the point to be made is that these latter structures, which we call 'bacteria', are obviously possible, and given the conditions named, altogether probable. Most biologists would tell you that the chance of some such form of life arising on any earth-like body in the universe is quite high. This leaves only the last item to be explained: why do they upgrade?
Now this is where numbers and probabilities get stuck. Remember me saying that it seems not to be compulsive for sheer survival. This part of the conclusion is therefore totally interimistic. On earth, it occurred because the atmosphere changed about 2.5 billion years ago to a level of toxicity that was fatal to microbial survival (I'm talking about the air we breathe!): and it is from this point onward that 'upgrading' sets in. Organisms needed to find a way to detoxify the air; this meant devising respiratory structures, therefore necessarily an increase in body size and complexity. This process has never stopped, but greater size and complexity impair survivability in other ways, as will be apparent without me spelling it out. But it explains why to every billion bacteria, there might be 1 million mozzies, 10 cats and 1 human. But all these structures are implicit from the moment that bacteria were compelled to follow this path of proliferating organic forms.
Ok, all this is rudimentary. My point is merely that under certain circumstances, as demonstrated by historical developments which we can easily trace back, this 'upgrading' was already latent in the first biological polymer when it came into existence. Whether such a path is logical may be another matter. Whether a human being would arise necessarily in the course of such evolutionary pathways I'm inclined to doubt: too many other contingent factors might intervene, and humans are not necessarily the 'goal' of these developments. There may be no goal at all, of course; or if you're a believer, you might say that humans are the outcome of a directed evolutionary path. One way or another, however, human-like creatures may not evolve on other planets, even though there is no logical argument why they should not. What happened here cannot be denied to be possible elsewhere, since as a possibility it has already occurred. Finally, the argument for 'upgrading', which includes, eventually, nerves and therefore consciousness, is pretty plausible too. There are (even if I deny it to mozzies and fleas) enough species on earth with nerves to admit that consciousness is an almost inevitable adjunct to young upwardly mobile creatures.
Humans? Well, put the question the other way: why dogs and cats and horses? All these are, to some extent, contingent developments. Humans might have developed without dogs, cats and horses. We would not have evolved, however, without the species 'mammals' making the grade. So there is a certain hierarchy which cannot be ignored. When one talks about structures, as I did, it is incumbent on us to remain aware of the priority of some types of structures before others.
But: the 'movie' of life's evolution on this earth is not a path, it's a bush with thousands and millions of branching points at every juncture. It's likely to be just the same in principle, though different in detail, on any other life-bearing planet. So I would expect some structure analogous to earthly life forms to arise wherever the possibility exists; I would expect (as indeed most of us do from sheer habit) that if there are mammals, they would also have bilaterally symmetrical body forms, because these things don't happen just on a whim; but I would expect that their details might be very, very different. For example, four different types of eye have evolved on earth, and the lobster's eye is so different from ours that you wouldn't want to believe it (it's made up of little pixel-like rectangles!). Moreover I am of the belief that, because the mind is a thing with structure, aliens endowed with a mind would also show a similar structure of thinking to ours, though obviously different again in detail (namely in the influence which the environment would have on their intuitions). However, that's enough speculation; but I hope you get the point that it's not numbers, but structures, which are important. Let me add, as a joke, that the probability of life arising on earth (mathematically calculated on the chance of certain amino acids joining up by chance) is 0.0000000000001 or even less: the universe isn't old enough to see it happen. So: structure and complexity to the rescue!
Something you might like to read: Stuart Kauffman, At Home in the Universe; Graham Cairns-Smith, Seven Clues to the Origin of Life; Ian Stewart and Jack Cohen, Figments of Reality. Now these books are fun; and since they were written by scientists working at the coal face of research, you can rest assured that they don't push fancy theories without some hard evidence at their back.
Best of luck!
Your question is based on an incorrect conception of probability. The probability of humans is either unanalyzable, or unity. If you want to attempt to analyze the probability of humans, you certainly cannot do it by counting zygotes. That assumes that all zygotes are in a big barrel, and someone is picking them out at random. I mean... really.... You might try to start with bits of RNA in a puddle and try to estimate the odds of coming up with human DNA... but how? We don't have the slightest idea of what conditions were on earth when life started evolving, so there's no way of estimating what the odds were of some process of which we have no knowledge. Not only that, but you just don't do statistics this way, I don't care what blather you've read in the papers. You do statistics either by taking a sample of a population, and estimating from characteristics of that sample how the same characteristics would be distributed in the population. Or you run a process multiple times and generate odds by looking at the results of the runs. Or, if you've got lots of information, e.g., that a die has 6 sides and its shape and weight are evenly distributed, you might use that information to predict odds. Ok? So, which of those do you have for this little question? None.
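To make the contrast concrete, here is a minimal Python sketch of the first two legitimate ways of "doing statistics" named above: estimating a proportion from a random sample of a population, and generating odds by running a process many times. The population, the 3% figure and the coin-flip process are all invented for the example:

```python
import random

random.seed(0)  # make the illustration reproducible

# 1) Estimate a characteristic of a population from a random sample.
#    Invented population: 1,000,000 items, 3% of which have some property.
population = [True] * 30_000 + [False] * 970_000
sample = random.sample(population, 1_000)
print("estimated proportion:", sum(sample) / len(sample))    # close to 0.03

# 2) Generate odds by running a repeatable process many times.
#    Invented process: two coin flips; estimate the odds of two heads.
runs = 10_000
two_heads = sum(random.random() < 0.5 and random.random() < 0.5 for _ in range(runs))
print("estimated odds of two heads:", two_heads / runs)       # close to 0.25
```

The third way, using known information about the process (as with a fair die), needs no simulation at all. The point being pressed is that none of the three is available for the zygote question: there is no random sample of "all sentient zygotes", evolution cannot be re-run, and the generating process is unknown.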
You're trying to look at all zygotes and estimate from that sample how many should be human. But that kind of sampling depends on assumptions like: the distribution of the characteristic is random in the population. If it isn't, then how do you collect your sample? It's like saying, we'll take a random sample of fish and animals and from that see how likely it is that animals have four legs. What that ratio would mean would be that if you were an alien from space taking random samples of creatures on earth you'd get x percent with four legs. But that doesn't tell you how likely it is that animals have evolved four legs, all it tells you is about alien sampling results.
And you're not going to run evolution multiple times, unless you have a few billion years to spare.
So since you have one sample, and you can only do the run once... then your odds are just the odds of your spilling your tea, after you've spilled it. Absolute certainty. You have spilled the tea. Humans have evolved. Alternatively, you can say the odds are unknowable.
Steven Ravett Brown
Given that reality is immense in comparison to myself, it is obvious that I myself am not the most important thing of all those things in existence. Therefore, what I consist of is not important: my thoughts and ideas, such as love, happiness, etc., everything that is personal to me. Having excluded all of those personal things, what is it that is most important then, of all things in existence? That is, in the process of our making our choices in day-to-day living, what is it most reasonable to see as taking precedence over everything else, having already established that it cannot be anything of a personal nature?
Well, nothing is really, objectively important. Whether or not something is of importance is a value judgement: Surely, what is personal to you is important to you? How can you live as if your decisions, thoughts and ideas are simply of no value at all? You would simply have to give up, not bother, collapse on the floor. But after a bit you'd probably find value in getting up and dusting yourself down and getting on with things.
So, as far as immensity of reality is concerned, if you don't mind me saying, I agree that you are not important. But surely you must judge yourself to be important insofar as it is you who has to live this life. It is you who have to live with your decisions, and those close to you care about them too. And, given that they are close to you, they probably think you are important to them.
Why should anything have precedence? Certainly there is nothing in reality which determines what has precedence in value terms, although the natural environment seems quite important to all of us if one of our values is survival.
You and I, Michael, and all the immensities of the universe you refer to, are possibly nothing other than the hair on a flea's leg in some superdimension of which we cannot be cognisant. Okay: maybe we're not; maybe these immensities are 'real' and there is truly nothing other than this one universe we inhabit. Would it make any difference? Are there, in other words, scales of immensity meaningful to us, and is it of any help to our self-perception whether we are a speck in the universe or a hair on a flea's leg? Would it help if I told you that the hair on a flea's leg in our dimension is so enormous, that to a microbe it would seem like the Himalayas? I think the answer is in the negative on all counts.
But let me give you one example where they do count. When you were conceived, your existence began as 1 cell. But your grown body contains trillions of these; but although on the scale of 1 cell this is just another prohibitive immensity, here you are, with 'thoughts and ideas, such as love and happiness' etc. You wrote all this down and did not take note of what you wrote? Incredible! You wrote down, 'I have thoughts and ideas', and it did not occur to you what an immense privilege you enjoy! Fancy being endowed by the chance of being born with the gift of thought, the gift of love, the gift of ideas. If atoms could speak, do you not think that they would call it grossly unfair that you, a mere speck of mortal dust, should be so privileged, while they, the substance of the universe, are mute and deaf and dumb and in fact do not even possess enough agency to move themselves, let alone a thought!
All right, I exonerate you: after all, we humans do have an overall propensity to be jealous of beasts of prey and lunge at every opportunity to show how clever we are at killing and destroying, while taking the possession of a mind for granted. But this general lunacy does not invalidate my point. You are Michael, you have thoughts and ideas, and one of these thoughts concerns the immensity of the material structure of the universe and the 'puny' You which presumes to want love and happiness. And you wrote all this without realising or thinking about the fact that a handful of humans on this planet, scarcely 4 billion of them, possess minds, that is: self-conscious awareness; and this feature of the universe is so unique and absolutely precious in the face of an immensity of DEAD MATTER, that you felt intimidated rather than elated and grateful and filled with a sense of something so extraordinary that the immensities 'out there' shrink to a cipher. A sense that the universe is a mindless morgue of matter, which yet, in one remote little corner, began a process that for all you and I know may also have begun in many other places at roughly the same time: a process I call EVOLVING VALUE.
Value: mind stuff! Life!
No other reason whatever can be put up for considering the universe at all. That, potentially, it contains values. That potentially it contains minds. That meanwhile, it actually contains values and minds. And yours is one of them. Without your mind, and my mind, and everybody's thoughts, dreams and ideas, the universe would not know itself.
Your statement 'it is obvious...' isn't at all that obvious. On the contrary, Nietzsche would have approved of considering oneself most important in the universe (and at the same time staying humble). That means believing in yourself, and defending your own points of view like a lion (without getting unreasonable). In fact that's an answer to your question of what is most important.
At the same time, such a question doesn't serve any use. Ask yourself what you want to do with the answer. I could say God, science, humanity, The United States, etc., but does that make one happy?
Is it possible for a society to sustain itself and have a moral order without a religious foundation for such an order?
Has there ever been a human society without a religion as the foundation for a moral order?
How do the alternatives (secular sources) for morality relate to the religious sources?
Which source (religious or secular) is superior or better in any way and why?
Why are there so many difficulties in arriving at a definition of religion? What are the criteria or considerations in developing a definition? What is religion?
How can one give a position in a distinctly and explicitly philosophical fashion, at the same time being critical and comprehensive in developing and defending a position?
That's a 'nasty' question, and without doubt essential.
First let's make the distinction between religion and belief.
The word 'religo' in Latin means 'to bind'; that meaning speaks for itself.
My personal opinion is that 'religion' tends to 'absolutism', while beings need (relative) beliefs.
Or said in another way: 'God hates religion', or 'beliefs want to be free'.
Every knowledge-system is based on beliefs. That doesn't mean that it has to become 'religious' (used in the sense of dogmatic), but the danger is always there. So to answer your first question: it is possible, but it proved to be difficult.
The answer to your second question I don't know, but I fear not yet.
The alternatives to religion (or dogmatism, determinism, absolutism, fundamentalism) are in my opinion found in relative views. Mind that there is nothing wrong with authoritarian knowledge, but it should be compensated for (otherwise teachers become gods). Giving an example would be wrong, because it is not the view that is important, but the way of viewing. Not necessarily secular, because secular beliefs can become very dogmatic (as proved by Stalinism).
In essence it is the controversy between Popper and Kuhn, or the distinction between absolute and relative knowledge (Kuhn basing himself on Wittgenstein, and Wittgenstein (possibly without knowing it) on Nietzsche). I respect both Karl Popper and Thomas Kuhn very much, but each stressed very much one side of the coin.
Any learning phase turns out to be mainly authoritarian (strict democracy in education has failed), but it should be followed by the use of your own creativity. That's what Nietzsche stressed with his 'Superman': he warned against religion but cherished any free belief (even if he personally didn't agree; he admired the fanatical (but reasonable) defence of one's own convictions).
It's like studying philosophy and, thirsty for knowledge, drinking in the views of your professors (but hopefully mainly their methodology), and afterwards using the acquired knowledge to come to and defend your own views. So studying is not about copying the views of your teachers, but about learning the means that are purposeful for you.
That's exactly what makes studying philosophy confusing. In most studies it is clear after finishing that you have learned various methods, but in philosophy there is always the danger as well of being drowned in views. This is the 'religious' danger of philosophy: often without realizing it you become 'bound' by the views of some professor and learn to defend those.
I suppose you are aware that your question entails a lifetime's worth of study!? To deal with such a brief in a couple of paragraphs is simply not possible. The best, I feel, I can do for you in this restricted compass is to make a few elementary points, hoping that someone qualified in theology will add to it (and I take the risk of contradiction in my stride). So in relation to your first question, it is indubitably possible to have a moral social order without a religious foundation. Not only is it possible, but classical China was a living example of it. Confucianism, although sometimes styled a secular religion, was essentially a social structure based on humanitarian principles adopted from Confucius and his followers (Mencius, Hsun Tsu) and adapted to the living needs of society. Its religious component was restricted to the performance of certain rites, which were never in our (Christian) sense religious, but a simple straightforward act of contemplation and human piety. Confucianism was, however the actual practice may have changed from time to time, an essentially and intrinsically secular doctrine and reigned as the dominant doctrine in China for nigh on 1500 years.
The others of your questions need to be addressed in a different kind of context. Whether religious or secular structures are superior is, I think, a non-issue. If history may be asked to 'prove' anything at all, then it can show at best that religion is 'good' only for two types of societies: those which are in an anarchic shambles and need the cohesiveness of a single doctrine imposed from above, and those societies which permit the individual the choice of their religion. China, ancient Greece and modern Europe/America belong among the latter type.
The difficulty of arriving at a satisfactory definition of religion is, simply, that majority opinion is not a philosophically relevant criterion. What we style 'higher religions' is, in my view, simple prejudice: we represent a higher type of civilisation, ergo our religion must be higher. I stress that this is just my opinion; and this pertains still when I add that the critical reflection on religion cannot ignore the fact that humans have throughout history (nomadic and prehistoric included) demonstrated a clear propensity for anthropomorphic thinking: we followers of Jehovah and Christ delude ourselves that we have a 'purer' concept; but against this it can be argued that (a) very little in the Christian philosophical literature is clearly non-anthropic and the little there is has almost no influence on the shape of the religion in either of its two major denominations; and (b) even the 'pure' concept cannot claim intrinsic superiority to the simple shamanic conception of good and evil spirits residing in plants and animals and human body parts. Pure or simplistic, both concepts have, critically assessed the same validity; and if critically you feel compelled to doubt one, then this automatically disqualifies the other (assuming no bias intervenes). To some extent, I would argue that a critical assessment of a religion is a non sequitur in any case: religion is principally a matter of belief, in the second instance a possibly metaphysical state of mind. But if one were to take the idea of a critique seriously, then you would have to first disown religion and seek a rationale for believing in a God. You would find that, I think, almost impossible.
No matter what any person believes, the image of some sort of God will come into their mind. Even if they do not believe in that image, they will still hold an image of GOD in their minds, so that they can reject it. Therefore, GOD has to exist in the mind of the most ardent non-believer.
Since a belief is only a concept or a perception, and a non-belief is the opposite of another person's perception, both concepts can have no true proof of meaning in the existence or non-existence of God.
A Belief needs some doubt of the truth, for if there is truth, there is no need of belief. Therefore: A belief in God can never possess sufficient validity or proof.
But, a thought of God exists in everybody's Mind. And a thought is beyond any belief or non-belief.
The image of GOD that the non-believer wants to dismiss, still stays in his/her mind and therefore must exist in that person's life. For something not to exist, it cannot be experienced in a thought.
This is proof beyond any Doubt... That GOD exists in every person's mind.
No matter what any person believes, the image of some sort of unicorn will come to their mind. Even if they do not believe in unicorns, they will still hold an image of a unicorn in their minds, so that they can reject it. Therefore, unicorns have to exist in the mind of the most ardent non-believer.
But a thought of unicorns exists in everybody's mind. And a thought is beyond any belief or non-belief.
The image of a unicorn that the non-believer wants to dismiss, still stays in his/her mind and therefore must exist in that person's life. For something not to exist, it cannot be experienced in a thought.
This is proof beyond any Doubt... That UNICORNS exist in every person's mind!
Perhaps you can see the problem, now?
Steven Ravett Brown
"No matter what any person believes, the image of some sort of God will come into their mind."
From this assumption, Mike proceeded to argue:
"That God exists in everyone's mind."
You can see he hasn't really made any progress. In order to demonstrate something, he first assumes it!
Unfortunately, although we may agree that the assumption is LIKELY to be right, it has a form that makes it very hard to prove. To prove that every member of some class has a particular property, it is necessary to either:
(a) examine every member exhaustively, without exception; or
(b) demonstrate that that property is a necessary consequence of membership in the class.
Of course, it is easy to disprove the assertion, if we can find just one counter-example. I give you the mentally handicapped man who lives in our street. He is a "person" and doubtless has beliefs. But he has no words. I will not assert that he has no image of God. Rather, I will ask you how he might acquire one? And if he did, how would you demonstrate that?
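To put the point in miniature: establishing that every member of a class has a property requires either route (a), exhaustive examination (only feasible for a small, finite, listable class), or route (b), a derivation of the property from membership itself; whereas a single counter-example refutes the universal claim. Here is a minimal Python sketch, with an entirely invented three-member "class" and invented data, just to exhibit the logical asymmetry:

```python
# An invented, finite 'class' with invented data, purely to illustrate the
# asymmetry between proving and disproving a universal claim.
members = ["Anna", "Bram", "Carla"]
holds_image_of_god = {"Anna": True, "Bram": False, "Carla": True}   # hypothetical

def P(x):
    """The disputed property: 'x holds some image of God'."""
    return holds_image_of_god[x]

# Route (a): exhaustive examination -- feasible only because this class is
# finite and fully listable, which real classes of people are not.
print(all(P(x) for x in members))        # False, given the invented data

# Disproof is far cheaper: a single counter-example settles it.
print([x for x in members if not P(x)])  # ['Bram'] -- the universal claim fails
```

Route (b), showing that the property follows necessarily from membership in the class, is what the original argument would need, and it is exactly what it does not supply.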
Which philosopher wrote about different kinds of love?
What is knowledge and how does one obtain it?
There are four classes of knowledge:
1. Tacit. This covers the large segment of non-specifiable knowledge which is transmitted mostly by example. The teaching of skills like violin playing, or surgical diagnosis or artistic photography, belongs in this category, where instruction relies on demonstration rather than precept and where, as a pupil, you acquire the knowledge you need by trial and error and a close involvement with your materials. Hence it is the kind of knowledge in which judgement is all-important and where no one person is in possession of the complete range of knowledge that pertains to any single knowledge area.
2. Skeletal and/or Unfocused. This kind of knowledge arises when you absorb focused information, but owing to its complexity or sheer volume, you remember it as a generalised, unfocused structure of knowledge. It is the kind to which the proverb, 'Knowledge is not the having of it, but the knowing of where to get it', applies. So you hold fast to outlines, blocks and general patterns as well as, evidently, the means of acquiring the details to flesh out this skeleton.
3. Articulated. This type of knowledge is detailed and focused. This is where you command not only the structure, but the content as well. A memorised poem or train timetable will serve as examples. Obviously this cannot, on the whole, do without Item 2 also being available, because human minds can only hold so much detail. Of tremendous importance to this type of knowledge is, in addition, the means of hanging on to it. Depending on which area of knowledge is covered, this might entail laying it down in books, using abbreviations and aides-memoire etc.
4. Symbolic. This covers areas which we employ quite generally, though we are rarely explicitly aware of it. We use signs, allegories, indexicals, metaphors constantly; and these are indubitably a form of knowledge. It comes out, for example, in such a common adage as 'birds of a feather flock together', which is not an ornithological statement, but a general statement reflecting a certain knowledge of human behaviour.
These are matched by four knowledge acquisition systems, respectively:
1. Phyletic, concerning inherited memories, genetically transmitted characteristics, archetypes, subconscious conditioning and so on. A lot of knowing 'how to' is passed on through such means, as any mother of a baby will know from the day it is born.
2. Cognitive, which relates to understanding and comprehension.
3. Aesthetic, in the two-fold meaning of sensation and perception.
4. Prehensive, which is concerned with physical objects; but since most of these do not come with a handle by which to grasp them, it also serves as a term to embrace the classification of knowledge under rubrics like weight, mass, volume, density, distance and so on.
A clever question, because almost everybody takes the word 'knowledge' for granted. Knowledge is in my view that part of a fantasy that you share with others. So it can be seen as a shared fairytale, or a shared game.
This is not to discredit knowledge, but to state my relative view of it. Knowledge to me is not 'absolute'. In fact it is not really important in itself; it is the process of acquiring it that it is all about.
Studying philosophy is not meant to gather knowledge, but to acquire the means to grasp it. Professors are not meant only to lecture you on their views, but above all to give you the means to construct your own views, and to teach you ways to evaluate these and those of others.
Knowledge is not to be obtained like, for instance, money. The process of acquiring it makes you the owner of the knowledge. In fact the two cannot be separated (they form a mathematical unity). To possess knowledge you need to have acquired it, and having acquired it personally you automatically own it.
Compare it to the noun 'work', which includes effort: without effort there is no work. You can let others do your work, but you can't just take their knowledge.
You can have workers killed after their effort, but killing wizards would be like killing the hen that lays the golden eggs. Without the hen, no golden eggs; and without wizards, no knowledge. That is why Merlyn the wizard was always safe, while knights were exchangeable.
Wizards never explain to you how they perform their tricks; likewise, you have to gather your own knowledge.
What is the justification for the use of discipline/ punishment in schools?
Discipline is necessary in any organisation. The problem is how to enforce that discipline. The problem becomes even more acute when dealing with children; but there is no avoiding the fact that discipline is a vital part of a young person's education. A young person requires guidance until s/he reaches an age where they can make judgements and weigh up situations for themselves. Education is not just about academic subjects; it also includes what my generation called learning how to become a good citizen. Part of this education was learning good manners, learning to respect others, and a general idea of what was considered good and what was considered bad or evil. A major factor, which people often overlook, was self respect; it is difficult to respect others if we have no respect for ourselves. Underpinning all this was the firm imposition of discipline.
Of course, discipline of children was always considered to be the main responsibility of parents. The home was where good manners were expected to be taught, where parents were expected to set a good example; however, schools did take it upon themselves not only to back up parents but to take a lead. A good hiding for a misbehaving pupil at school was acceptable to all, and was expected, from the most famous public school in the land down to the most basic Council School. It was not unusual for a young person to be punished in school and, on arriving home to discover that their parents knew of this, to be subjected to either another good hiding or being sent straight to bed without their evening meal.
For those of us old enough to make the comparison there is, without doubt, a massive paradigm shift in received knowledge regarding morality and ethics. This shift from Victorian times, through Edwardian times, through the twenties and on into the sixties was hardly noticeable. Then the changes in the structure of society began to accelerate at such a pace that the culture shock had a devastating effect on relationships between parents and, particularly, their teen-age offspring. Still living in the past paradigm and reluctant to accept the changes taking place, the earlier generation were left gasping like goldfish in a bowl deprived of nearly all its life-giving water. The gap between the previous and the developing generation opened rapidly; the relationship between young people and adults changed as a different concept of discipline replaced the old. One of the major changes was the different attitude towards discipline in schools: corporal punishment was eventually banned, and a rather strange shift of emphasis from discipline of the young person to protection of the young person took place. Physical contact between teacher and pupil took on a sinister meaning: teachers began to feel confused and helpless; their world was turned upside down almost overnight. I distinctly remember a friend of mine almost losing his job because he placed his hand on the shoulder of a girl as she refused to step into line in the playground; blatantly refusing to obey the teacher, she was quite prepared to hold up the entire school. She told my friend to take his hand off her shoulder because that was a form of assault and he would be reported. He appeared before the Head and the girl's parents the following day, was completely humiliated and warned about his future conduct. Bizarre events began to be reported in the media, like the one where the teachers were besieged in a high school by angry pupils who had taken offence at the attempted imposition of new instructions. Damage was done to staff cars, tyres let down, etc., before the police arrived to disperse the pupils.
The relaxing of discipline in schools initially brought about a lowering of academic standards; a high percentage of pupils leaving primary schools were poor at reading and writing. Whether coincidence or not, there was a continuing rise in truancy, vandalism, attacks on the elderly by young people, drug addiction in the young, thieving and general criminality in all areas. Along with this came a marked increase in general bad behaviour, a decline of good manners, and a lack of consideration and respect for others.
We must not overlook the fact that other changes had taken place in society which coincided with school discipline problems, the main one being changes in family life: the increasing divorce rate, a lack of interest in the sanctity of marriage, children having two homes with father and partner in one and mother and partner in the other, and so on. In addition to all this came a rapid decline in church going, and the almost total decline of Sunday Schools; both have had an effect on the discipline of young people. Ironically, these changes, which to some are catastrophic events, throw the responsibility for discipline back on the schools.
It is, of course, not the case that all young people fall into the categories indicated; I believe we have all met or know some wonderful young people who are a credit to their school and to society. Unfortunately, the daily exposure of the criminal behaviour of young people shows a continued increase. Even as I write this I am informed by the media that 17,000 juveniles stole vehicles last year. Add the thousands of young drug addicts, muggers and thieves over the same period, and we get the feeling that we are not just on the verge of anarchy, but that it has already arrived. It seems that not only do we need real discipline back in schools, but we also require a tightening of the lenient laws governing this country. When the do-gooders that brought all this about realise that there is a difference between assaults on children and regulated punishment for their own good, and for the good of society, we shall perhaps see a change back to normality; but don't hold your breath, and don't forget that the bias has swung so far towards the welfare of young people that parents can be taken to court by their own offspring who have decided that discipline is not for them.
I am obsessed with this mind issue. Your last answer was highly satisfactory. I began to read the references you sent me. Thanks a lot. In this one I want to bore you with some novice theories about mind, if you allow me.
I believe that mind works pretty much like a Hierarchical Petri Net. At each level there is one statement, which can be modelled with places and transitions.
Let "Is Eric Good?" be a question asked of me. Let this question, in its context, be defined with two places and one transition, when I am subjected to this question.
At lower levels, I have multiple small nets, at various levels of detail, connected to the concept "Eric", as well as to "Goodness".
"Eric hit me", "Eric's sister is Jane", "Eric went to college" etc. "Elephants are good", "Bad is not Good", "Getting beaten is bad", etc.
These chunks are further linked to others like "Jane is a girl", "Jane is good".
I believe that man keeps his experiences in chunks like the ones above: very tiny nets with one or two places and transitions. However, when asked a question, or while working through an exercise, man goes into these small chunks, decomposes each of these transitions and places, forms up new nets at run time, and aggregates them to come to a conclusion which is also a tiny net.
Turning back to my example, I conclude that "Eric is Bad". When asked the question, I simply run through the nets in the lower level, associate "Eric hit me" as "Something Bad" and conclude that "Eric should also be bad".
Now what I do is not just finding a "path" in the mathematical sense. If it were so, I could easily come up with "Jane is Good" as an answer to the question. The process involves "path finding" but is much more complex.
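To make the idea a little more concrete, here is a minimal Python sketch of the 'chunk' picture described above. It is only an assumed toy: the chunk names and the aggregation rule are invented for illustration, and it is closer to a labelled graph than a genuine Petri net (there are no tokens or firing rules), but it shows tiny stored chunks being composed at run time into a small conclusion.

```python
# Toy sketch (illustrative assumptions only): each "chunk" is a tiny net with
# two places (concepts) joined by one labelled transition (a link).
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    source: str    # place, e.g. "Eric"
    relation: str  # transition label, e.g. "hit"
    target: str    # place, e.g. "me"

# Long-term store of experience chunks (examples taken from the question).
MEMORY = [
    Chunk("Eric", "hit", "me"),
    Chunk("Eric", "sister-of", "Jane"),
    Chunk("Jane", "is", "good"),
    Chunk("getting-beaten", "is", "bad"),
    Chunk("bad", "is-not", "good"),
]

def is_good(person: str) -> str:
    """Naive run-time aggregation: walk the chunks that mention the person
    and compose them into a new, tiny conclusion net."""
    for c in MEMORY:
        if c.source == person and c.relation == "hit":
            # "X hit me" composes with "getting-beaten is bad": conclude X is bad.
            return f"{person} is bad"
    for c in MEMORY:
        if c.source == person and c.relation == "is" and c.target == "good":
            return f"{person} is good"
    return f"{person}: no conclusion"

print(is_good("Eric"))   # -> Eric is bad
print(is_good("Jane"))   # -> Jane is good
```

A real hierarchical Petri net would add markings, tokens and firing rules, and a less naive aggregator would search the whole store of chunks rather than apply two hand-written cases; the sketch only fixes the shape of the idea.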
Anyways, I was thinking that this was a good beginning for a modeling effort about how mind works and wanted to share it with you. I need to read more about this type of research.
Could you recommend me a couple of books/articles: 1) that looks at the issue through hierarchical nets and graph theory, preferably Petri Nets. 2) that contains my theory about runtime graph formation. 3) If people tried to explain this thing through graph theory and could not succeed please let me know about that as well. I do not want to waste a life on this :)
I assume you know about this site, then: http://www.daimi.au.dk/PetriNets/.
But here's one general point. There are many, many ways to approach and to model the mind. Various types of networks are one, of which Petri and graph theory are two categories. It may be that they are formally identical. I will give you some other references below; you can find all the Petri stuff you want at the above site. The danger, if you have not yet done much reading, is to take one of those as the way, and spend enormous amounts of time and energy at it. This is the danger of too narrow an education in any field. IF your goal is to take the Petri Net and elaborate it, and see what can be done with it, then, fine, go with that particular approach. IF your goal is to model or to think about the mind, you will be taking one out of dozens of approaches and effectively saying that it is the correct one. But it isn't, and here's why (and I'm sure many Petri people and others will disagree with this): the brain is an analog system, not a digital system. A computer is a digital system. Can a digital system "model" an analog one? Yes, certainly. What does "model" mean? Now, there's the question. I will not go into that here; suffice it to say that there is ongoing debate on that point. Second, can a digital system duplicate, functionally (obviously it could only be functionally), an analog system? This is another question, not the same as the last. If you want to create a mind in a computer (a digital computer), you've got major problems, I think... indeed, I don't think it's possible. But you can model one, up to a point. There's an important distinction here that many have missed.
So, what approaches should one take to a) model and b) duplicate mind? But you see that these are two different questions. Your approach above will not duplicate mind. But it might model it to a certain extent.
For history (and good background) in modeling and networks:
Ashby, W. R. Design for a Brain: The Origin of Adaptive Behaviour. London: Chapman and Hall Ltd., 1960.
Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, D.C.: Spartan Books, 1962.
Minsky, M., and S. Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: The MIT Press, 1969.
McCulloch, W. S. Embodiments of Mind. Cambridge, MA: The MIT Press, 1970.
McClelland, J.L. Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Psychological and Biological Models. Edited by J.A. Feldman, P.J. Hayes and D.E. Rumelhart. Vol. 2, Computational Models of Cognition and Perception. Cambridge, MA: The MIT Press, 1986.
Rumelhart, D.E. Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations. Edited by J.A. Feldman, P.J. Hayes and D.E. Rumelhart. Vol. 1, Computational Models of Cognition and Perception. Cambridge, MA: The MIT Press, 1986.
Dreyfus, H. L. What Computers Can't Do. Cambridge, MA: The MIT Press, 1972.
Gurwitsch, A. The Field of Consciousness. Edited by A. van Kaam, Duquesne Studies: Psychological Series. Pittsburgh, PA: Duquesne University Press, 1964.
Husserl, E. The Idea of Phenomenology. Translated by W. P. Alston and G. Nakhnikian. Fourth ed. The Hague, Netherlands: Martinus Nijhoff, 1970.
Merleau-Ponty, M. Phenomenology of Perception. Edited by Ted Honderich. 1st ed, International Library of Philosophy and Scientific Method. New York, NY: Routledge & Kegan Paul, 1970.
Relatively early cognitive & modeling refs:
Allport, A. "Visual Attention." In Foundations of Cognitive Science, edited by M.I. Posner, 631-682. Cambridge, MA: The MIT Press, 1989.
Deese, J. The Structure of Associations in Language and Thought. Baltimore, MD: The Johns Hopkins Press, 1965.
Fodor, J. A., and Z. W. Pylyshyn. "Connectionism and Cognitive Architecture: A Critical Analysis." Cognition 28 (1988): 3-72.
Gardner, H. The Mind's New Science. New York, NY: BasicBooks, 1985.
Gregory, R. "Perceptions as Hypotheses." Philosophical Transactions of the Royal Society of London Series B, Biological Sciences 290 (1980): 181-197.
Grossberg, S. "How Does a Brain Build a Cognitive Code?" Psychological Review 87 (1980): 1-51.
Johnson, M. The Body in the Mind. Chicago, IL: University of Chicago Press, 1987.
Koffka, K. Principles of Gestalt Psychology. 2nd ed. New York, NY: Harcourt, Brace & World, Inc., 1963.
Mead, C. Analog VLSI and Neural Systems. New York, NY: Addison Wesley Longman, Inc., 1988.
Neisser, U. Cognitive Psychology. The Century Psychology Series. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1967.
Pollack, J.B. "Recursive Auto-Associative Memory: Devising Compositional Distributed Representations." Proceedings of the Tenth Annual Conference of the Cognitive Science Society, Montreal. Cognitive Science Society 1988.
Posner, M.I., and S.J. Boies. "Components of Attention." Psychological Review 78, no. 5 (1972): 391-408.
Rosch, E., C.B. Mervis, W.D. Gray, D.M. Johnson, and P. Boyes-Braem. "Basic Objects in Natural Categories." Cognitive Psychology 8, no. 3 (1976): 382-439.
Shallice, T. "Information-Processing Models of Consciousness: Possibilities and Problems." In Consciousness in Contemporary Science, edited by A.J. Marcel and E. Bisiach. New York, NY: Clarendon Press, 1988.
Shepard, R. "Attention and the Metric Structure of the Stimulus Space." Journal of Mathematical Psychology 1 (1964): 54-87.
Shiffrin, R. M., and W. Schneider. "Automatic and Controlled Processing Revisited." Psychological Review 91, no. 2 (1984): 269-276.
Treisman, A.M., and G. Gelade. "A Feature-Integration Theory of Attention." Cognitive Psychology 12 (1980): 97-136.
More modern refs:
Baars, Bernard J. In the Theater of Consciousness: The Workspace of the Mind. 1st ed. New York, NY: Oxford University Press, 1997.
Chang, F. "Symbolically Speaking: A Connectionist Model of Sentence Production." Cognitive Science 26 (2002): 609-651.
Craik, F.I.M. "Levels of Processing: Past, Present . . . And Future?" Memory 10, no. 5/6 (2002): 305-318.
Demetriou, A., G. Spanoudis, C. Christou, and M. Platsidou. "Modeling the Stroop Phenomenon: Processes, Processing Flow, and Development." Cognitive Development 16 (2002): 987-1005.
Dipert, R.R. "The Mathematical Structure of the World: The World as Graph." The Journal of Philosophy 94, no. 7 (1997): 329-358.
Fauconnier, G., and M. Turner. "Conceptual Integration Networks." Cognitive Science 22, no. 2 (1998): 133-187.
Fauconnier, G., and M. Turner. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York, NY: Basic Books, 2002.
Gernsbacher, M. A. Language Comprehension as Structure Building. Hillsdale, NJ: Lawrence Erlbaum Associates, 1990.
Gopnik, A., and A.N. Meltzoff. Words, Thoughts, and Theories. Edited by L. Gleitman, S. Carey, E. Newport and E. Spelke, Learning, Development, and Conceptual Change. Cambridge, MA: The MIT Press, 1998.
Grossberg, S., E. Mingolla, and W.D. Ross. "Visual Brain and Visual Perception: How Does the Cortex Do Perceptual Grouping?" Trends in Neurosciences 20, no. 3 (1997): 106-111.
Halford, G.S., W.H. Wilson, and S. Phillips. "Processing Capacity Defined by Relational Complexity: Implications for Comparative, Developmental, and Cognitive Psychology." Behavioral and Brain Sciences 21 (1998): 803-865.
Harnad, S. "The Symbol Grounding Problem." Physica D 42 (1990): 335-346.
Kahana, M.J. "Associative Symmetry and Memory Theory." Memory & Cognition 30, no. 6 (2002): 823-840.
Libet, B. "The Timing of Mental Events: Libet's Experimental Findings and Their Implications." Consciousness and Cognition 11 (2002): 291-299.
Maddox, W.T., F.G. Ashby, and E.M. Waldron. "Multiple Attention Systems in Perceptual Categorization." Memory & Cognition 30, no. 3 (2002): 325-339.
Reisberg, D. Cognition: Exploring the Science of the Mind. 1st ed. New York, NY: W. W. Norton & Company, Inc., 1997.
Rieke, F., D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes: Exploring the Neural Code. Edited by T. J. Sejnowski and T. A. Poggio. 2nd ed, Computational Neuroscience. Cambridge, MA: The MIT Press, 1997.
Rizzolatti, G., L. Fadiga, V. Gallese, and L. Fogassi. "Premotor Cortex and the Recognition of Motor Actions." Cognitive Brain Research 3 (1996): 131-141.
Sloman, S. A., B.C. Love, and W.K. Ahn. "Feature Centrality and Conceptual Coherence." Cognitive Science 22, no. 2 (1998): 189-228.
Sun, R. Duality of the Mind: A Bottom-up Approach toward Cognition. Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 2002.
Wegner, D. M., and J. A. Bargh. "Control and Automaticity in Social Life." Edited by D. Gilbert, S. T. Fiske and G. Lindzey. Boston, MA: McGraw-Hill, 1996.
Yaniv, I., D.E. Meyer, and N.S. Davidson. "Dynamic Memory Processes in Retrieving Answers to Questions: Recall Failures, Judgments of Knowing, and Acquisition of Information." Journal of Experimental Psychology: Learning, Memory, & Cognition 21, no. 6 (1996): 1509-1521.
The above is a bit, a slice, a small example, a mere taste, of what is out there. Dip into it before you go much further in your own thinking.
Steven Ravett Brown
Time travel is an extremely interesting subject, but is it really conceptually possible?
My straightforward answer is no, it is not possible, no matter how you bend it. But if I left it there, someone else will say, it is conceivable under such and such circumstances. So I'm going to have to invite you along on a little journey of problems, just two or three of them, but all bristling with way-out complexities. I'll try and make them as easy as possible, because it's worth thinking about these matters, and also because our lives are so much under the influence of science and science fiction today that the average person can hardly make out what to believe. And by golly, time travel is part of the fare! You must have noticed how much it is taken for granted, as if there were no argument about it!
Well now, since we have to start somewhere, let's take a peek at the 'space of all possible things/ events/ ideas'. Somewhere in this space you'll find time travel and no doubt millions of other ideas, thoughts, objects, events and possibilities that have been dreamt about. They are all in this 'space' as potentials, waiting to be realised. Yet the first thing to note about the 'space of all possible things etc.' is this: there is no such space; for even the 'space' itself, the concept of this 'space', is part of the 'space of all possible things etc.'! Hence it is not a real space, not a finite, three-dimensional volume, where things happen. So you understand that I'm talking about a conceptual space, an infinite realm with infinite possibilities that (so to speak) travels along with our finite realm of real things and real possibilities. It is the realm of the 'Maybe'.
The importance of this concept of infinity is not well appreciated, certainly not by time travellers. They tend rather indiscriminately to toss finite and infinite states around as if they were lego blocks. They talk about 'worm holes' and 'black holes' and 'big bang', and of 'string theory' and 'quantum flutters', which are all entangled with infinity. But consider that infinity means, by definition, that you can't count what's in it. So when you ask, how many atoms in the universe, you are immediately defining the universe as finite.
Having got this far, what about time? Well, it's really the same problem all over again. Is the universe in 'time' or not? Is time 'in' the universe or independent? Astronomers want to convince us that time was created with the big bang, but there is a big chink in that logic. For if the spread of time is finite, then of course the universe must be finite. And vice versa. But if the universe is finite, then we've only pushed the problem of infinity out of the way, because we are then supposing another universe which must contain ours; and that universe is probably contained in yet another: Russian doll universes all the way down. In philosophy this is called 'infinite regress'.
We're obviously getting ourselves into a huge mess. Let's narrow down our focus and note down a sort of definition: 'God invented time to prevent everything from happening at once.' This gives us a vital first clue to what's wrong with time travel. On this definition, time is a concept of simultaneity. It means that if two separate objects/ events occur such that third parties observing them agree in their happening at the same instant, these parties then have a means of plotting the events on a graph, marking their lines of approach and departure and assigning values (seconds, hours, days) to all changes in position. This graph is a 'frame of reference', which can now function as a tool for establishing the simultaneity of all events that fall within its scope. Evidently to make this work, a point at rest has to be presupposed, called the 'residual observer', around which the other events revolve.
Now another difficulty comes up. When you have three, four, a thousand, a billion frames of reference, practically all of them unknown to us because of the sheer size of the visible universe, the notion of simultaneity suddenly runs amok; our little graph just can't cope any more and you'll find that a second residual observer becomes necessary, then a third, a fourth... and in an infinite universe...? You guessed it: an infinite number of residual observers. Where does that leave our simple concept of time? Doesn't it mean there are as many 'times' as residual observers? True again.
So this doesn't get us anywhere. We're attacking the whole problem back to front. To find out 'what time really is', we need to put ourselves in the seat of time itself. We need to ride along with time on a beam of light. So let's now confront this issue with a 'practical' example.
Let's say you've been despatched from Earth to Alpha Centauri. In earth terms that trip is going to take four years at the speed of light; that's not time travel, but it will serve for an opener. When you last looked back, you might have seen your parents standing there, waving goodbye. A couple of days later, you look again and still they're there. Patient people! But when you look again a year later and find they haven't moved, you are suddenly jolted into the realisation that, of course, their image is travelling at the same speed with you. Time is standing still for you in relation to that scene.
Now difficult as it may be, try and draw a sound conclusion from this. These are not your parents, but merely their image. What then, if you could suddenly double back and return? The point is: nothing changes; and when you arrive, to your parents you will only have hovered in the stratosphere for a while and then come back down.
Now clearly this is nonsense. You've been en route for a year! Consequently there is an irresolvable contradiction: you cannot, as a physical body, be in two places at the same moment, but this is what the story entails.
It gets worse when you really start time travelling. Imagine yourself accelerating beyond the speed of light. As you gaze out the porthole, you'll start seeing things you shouldn't: ice ages, continental drift, the earth aflame like a drop of molten iron, etc. On our diagram of Earth, Alpha Centauri and yourself, your numbers are running into the negative: you've reversed the time relation between you and planet Home.
Now there is another side to this story. To observers on earth you would first dissolve and then disappear. Conventionally we take this to mean that the speed of light can only be attained by electromagnetic radiation (EMR), accordingly your acceleration has the effect of converting you and your craft into EMR. But this in turn means that, in relation to Earth, you have ceased to exist. You cannot therefore simply double back and hurtle back to Earth. She won't be there when you arrive. On your diagram, where Earth and Alpha Centauri comprise a frame of reference in close simultaneity, you have removed the residual observer, yourself.
But ah! you cry, even if I can't return to Earth, yet this is time travel, isn't it? Can't I now connect with another frame of reference?
Well, I promised you this was going to be complex, mind-boggling and irritating. For while you may conceivably exceed the speed of light in relation to your own system, you cannot exceed it in relation to light itself. Here the equation is EMR = Time. The grain of EMR in the universe is also the grain of time, and the best or simplest way to make sense of this is to reverse the notion of speed. To attain the speed of light means, in this context, for you to become decoupled from any frame of reference whatever because you have become connected to the stream of time/ light itself. But this 'stream' being the grain of time itself, means you are standing still again, only this time in relation to the whole universe. Then the objects of the physical universe, galaxies and nebulas and novas, will be fizzying around you in a bewildering torrent of criss-cross patterns across the entire 'sensurround' horizon. Indeed some or many of these objects may actually 'collide' with you, at the speed of light (!).
One last question: could you not 'decouple' from this unwished-for state and return to a definite existence? Unfortunately the answer, once again, must be 'no'. I keep saying 'you', as though there was a 'you' in this EMR stream. But of course, there's not: you have become a beam of light, pure EMR, which contains not the thinnest thread of information. Once upon a time, in your real life, 'you' were (among many other things) a packet of information; this is now gone, terminally erased. And this is of course the real crux of the matter.
Simultaneity is the coincidence of objects (information) in a frame of reference: and all these frames of reference are finite entities which might all, in principle, be co-ordinated in a network of finite observers. But 'behind' this structure is the structureless grain; picture it like a single dew drop somewhere in the midst of the Sahara desert. And in this structureless space all events occur simultaneously, just as the sand in the desert 'occurs' all at once; but for us, who have a finite perspective on them, these events occur in sequence and under conditions to which the concept of simultaneity can be fitted.
I hope all this makes sense to you! If you wanted to put it into a nutshell, you could say that time travel cannot happen because time is not real: it's not a road or a space or a field where you can identify Point A and Point B in relation to one permanent, unchanging residual observer. It is (as I said) the idea of some things occurring measurably simultaneously. So the crucial component (you might have picked this up when you recognised your parents as only an image) is this: that light waves bearing images are not physical reality. On this discrepancy the whole fancy breaks apart. Time travel, so understood, is mistaking a 'report' for the event itself; and of course a report can long outlast the event which has meanwhile ceased to be.
And this brings us back to the 'space of all possible things', where we started. Here simultaneity is meaningless, because in an infinite space nothing is simultaneous with anything else, there is no frame of reference and no residual observer; and indeed, there is nothing whatever in this 'space', not an atom, not a breath. Just dreams of finitude, of finite possibilities. Dreams of being, for nothing in this 'space', nor the space itself, has being.
It depends what you mean by 'conceptually possible'. I would say that time-travel is logically possible because there seems to be no contradiction in the concept (which is obviously very different from saying it's physically possible in our world).
The interesting question, as far as I can tell, is what is known as the grandfather problem. Suppose that time-travel is possible. Now, suppose you go back to the time when your grandfather is in his youth and you kill him. This would mean that, in the future, there will be no you. But then how could you have come back from the future and killed him?
Here I agree with David Lewis. He reckons that time-travel is possible but you can't change anything in the past. This is because he thinks of time as a big line and each point is equally real. Consider time T, when you travel back to point T*. Now, Lewis wants to say that point T* is equally real when you travel back as when you are there at point T*. The only difference is your perception of T*. The answer Lewis gives to the grandfather problem is that you can't kill your grandfather or change anything for that matter, for the reason that you were there already. This sounds weird but if you think about it it makes sense.
Travelling through time is something we all appear to do every day: this morning I was in the past, but now I'm in the present, which was the future! I assume, however, that what you are talking about is when an individual travels to a time outside of the ordinary scope. There's an interesting article on the subject in Le Poidevin & McBeath's book The Philosophy of Time, but I can't remember who wrote it. Here are two key issues.
First if we were to travel back in time it would appear possible that we could change the past, possibly causing a causal loop whereby our actions in the past affect the way we are in the future. Second there is the ontological status of the past and the future.
To deal with the first problem, consider the 'Back to the Future' scenario where the character potentially stops his mum meeting his father and therefore prevents his own existence. If this were to happen, however, it would not be the case that in the future he could go back and prevent his own existence. The argument therefore entails that if he can prevent his own existence then he can't prevent his existence. The other apparent way to avoid this problem is to suggest that you can't affect the past when you go back, but this is somewhat strange. The way around this problem is to say that the time traveller can affect the past but he can't change it: the 'past' is already a determined system in which the time traveller may cause an event, but any event that he causes will have already happened. He is therefore free to affect the past but he cannot change anything that happened in it.
The second issue is whether there is anywhere to travel to. There are two main positions on time, which broadly are the tensed view and the tenseless view. Without going into the positions too much, the tenseless view of time is that there is nothing ontologically privileged about the 'present' that we perceive; all times are equally real. This position is somewhat analogous to the conception most of us hold of space, where there is nothing special about 'here'; rather, it is just the place we happen to occupy. If you are a tenseless theorist (a B-theorist) then there clearly is a 'place' to go to when you time travel.
The second position that is held is the tensed theory (a-theory) of time whereby there is something privileged about the present, namely it is the only time that is present. Time flows from the future into the present, and the present to the past. One of the main motivations for this position is that it allows us to hold that the future is open and allows for a non-deterministic position of the world. The a-theorist has more work to do than the b-theorist at this point as for the a-theorist three main positions are viable:
a. Only the present exists.
b. The past and the present exist.
c. The past, present and future exist.
Now, depending on which of a-c you accept, your potential to travel to those places is affected: clearly, if you hold a then time travel is a priori impossible; if b, then you can't go to the future.
There are other issues but I feel these are the main two. As I say if you have an interest in time I strongly recommend Le Poidevin and McBeath's anthology [The Philosophy of Time. Oxford University Press 1993].
David Gerrold in his classic 70's sci-fi novel The Man Who Folded Himself (new edition published by BenBella Books, 2003 forthcoming) describes a version of 'time travel' where the time traveller hops to alternate time streams. For example, you could hop to a time stream where it is September 10, 2001 and foil the terrorist attack on the Twin Towers. That might make you feel good for a while. Until you realize that all you have succeeded in doing is prevent the attack in an alternative universe. In the actual universe, what happened happened, and can't be made to unhappen.
For more on Gerrold's time travel universe see my Afterword to The Man Who Folded Himself.
I've come up against an idea that won't budge. Perhaps you will see my error, or perhaps you can direct me to relevant literature. Here goes:
I've come to think it is impossible to imagine a universe in which I do not exist. Because in order to perceive that imaginary place, I must have some sense data of it. And in order for there to be sense data, there must be some existing thing that senses.
It's as simple as the logical contradiction involved in imagining yourself being in a room in which you do not exist. You "look around", but you are not there. But what is doing the looking? Is it possible to imagine that nothing there exists and yet looking happens? I don't think so. I think we just ignore the fact that we must "be" in this universe-that-doesn't-contain-us, and sidestep the contradiction. I think we must postulate our existence in that universe in order to perceive it, and in doing so we violate the premise of our non-existence.
Even though it is subject to the whim of imagination, I cannot so bend the rules of logic to imagine that I both am and am not in the same place. I cannot imagine what that would be like.
I used to think (just this morning!) that it was a simple thing to imagine a universe in which I didn't exist. Now I think that to the extent that I can imagine it, I violate the premise of my non-existence.
First, you are confusing two senses of "imagine". One is "perceive" and one is "think of". Obviously you can think of a universe in which you don't exist... you're doing it above. Second, you can think of being in a room, the furniture in it, etc., etc., with as much detail as you want... but you still don't have to be visualizing that room. So the dilemma you're having is that you can't conceive of visualizing something without a viewpoint from which to visualize it, and that implies an observer. Ok fine. Perhaps it does. However, first, that observer doesn't have to be you as you actually are. For example, you could visualize a room from 1/4 inch above the floor, or from the viewpoint of an ant, right? Now, how could that be you? So just whose existence are you "postulating" there? No particular existence; it's just that in order to visualize, you need a reference point. You're postulating a point of view, not any particular observer. Of course, you could object that your observer is utilizing visible light instead of x-rays or sonar, and so that implies that observer is restricted by what you can conceive of... ok, fine... and...? So in order to visualize something you must do it in ways that are limited to what you can conceive of. Well, I'll agree to that. But I still don't see that you're then restricting it to you, as such.
Further, what is the point of this question? Clearly you can conceive of a universe in which you don't exist; as I say, you're doing it just fine above. So what do you want? You want to be able to visualize a room in another universe, one in which you don't exist... without having a viewpoint from which to visualize it? But then what would "visualize" mean? Surely the act of visualizing itself implies a viewpoint? I suppose you might want to visualize from all possible viewpoints at once, and you're disappointed that you cannot. Well that certainly is a human limitation, and as such it is a human viewpoint that we must utilize to visualize a room, or whatever. So in that very general sense you're correct. I guess you're going to have to find a computer to do your non-human visualizing for you.
Steven Ravett Brown
Your problem is exactly Kant's problem. So here you have a focus at once for your endeavours. Now I or another respondent could write at length on what you want to know, without soon getting to the bottom of it. But assuming that it is an issue which really troubles you, I would suggest that you sidestep all second-hand accounts and go straight for the Critique of Pure Reason. It is by no means as hard as often made out; in fact, on my view it is a model case of sober philosophical writing and certainly no more difficult to read than Bertrand Russell (who would not be pleased to hear me say this!). Nonetheless you may get stuck because there are inevitable historical associations to absorb, and to help you overcome these hurdles, let me recommend Sebastian Gardner's book on the Critique in the Routledge Guidebooks. Alternatively, Bryan Magee has written a fine book on his own travails as a youth, very much of the same kind as yours, and how later his discovery of Kant changed his outlook on life and philosophy. This might be even better for you, given the similarities. The book is called Confessions of a Philosopher. Wishing you the best and that your argosy proves a happy and challenging adventure!
I now have a request to make of you. Your question is quite indiscriminate in its usage of the term 'sense data'. You're not to blame for it; it is a common fault. All the more reason to fix it! What you call 'sense data' are, in fact, sensa, which are the impressions received and processed by your nervous system, brain, perception, cognition etc. Sense data, on the contrary, are the unformed stream of impressions which do not make it to any of these processing units. In other words: we are bombarded every second of time by millions of 'sense data', but what we then actually see, hear, touch etc. are 'sensa'; the 'data' are the rejects. I wish this elementary distinction were more readily observed. In my reading of the literature I have observed massive amounts of confusion arising precisely from its non-observance. So you can help! Make the point whenever you're asked to write or speak on the subject.
I get your point.
But in one aspect I don't believe you. I think you can very well imagine what people are saying about you when you're not there. That's a favourite pastime. Now seriously: imagining a universe without yourself can be done in two ways.
1. Focusing on things that don't concern your personal presence (like gossip)
2. Focusing on things that need input of yourself
The second option requires your presence. So it is impossible to imagine it without yourself.
It's like trying to imagine yourself doing an exam without being there. I don't know the definition of it, but to me that seems a 'contradictio in terminis'.
I think I see what you're getting at: it is rather like Sartre's claim that you can't imagine your own funeral, because you're already there. I guess most of the problem revolves around how we imagine these situations. Visually imagining is hard because you can't escape your own first-person perspective; as in Sartre's example, there is a sense in which you're there. But suppose we just represent the situation linguistically: we tell a story about a world in which all that exist are gorillas that listen to classical music (which they've come up with) and eat purple asparagus. Now, I certainly don't exist in that world (I've stipulated that), but I have succeeded in representing that state of affairs by the simple method of Kripkean specification (after Saul Kripke), whereby we simply stipulate what's going on in that world and in doing so we represent that world. As long as that world is not inconsistent (i.e. there are no logical contradictions) then that world is logically possible. I suppose that if we try to visually imagine that world we always represent it from our own viewpoint, but as I've suggested, representation doesn't stop with our visual modalities.
The short answer is, I think, that sense data (if there are such things) don't have to be any particular person's sense data. It is the Idealist's mistake that all sense data must be his own sense data. Bishop Berkeley was (as might be expected) particularly bedevilled by this error.
Music is perhaps one of the most influential things on people and society. I find it difficult to understand how it works (i.e. the source and evolution of it into what it is today). My friend and I have been arguing about this question: what would define music's evolutionary pattern? We feel that an answer to this question could define music today and its future.
For example, I said music is sort of a ray that increases its width as it evolves.
Coincidentally I've just written a paper on a related subject, so I'm kind of 'hot' with it. But your question really requires a book-length response, you will therefore forgive me if I just mention a few crucial facets and leave you to research what else may need to be discovered.
1. Like most advanced brain functions, the auditory cortex is connected to several major 'processing sites'. Consider that we have to be able to recognise the direction from which a sound comes, to distinguish if the sound might indicate danger, to decipher grunts, cries, squeals as well as words, and of course to recognise some sounds as music. Now we should distinguish the last two items from the rest, because they only make sense in a context of a mind-like intelligence.
2. This is obvious with words, for although some animals can be conditioned to 'understand' words, they remain to them simply differentiated sounds, i.e. signals. It is vain to suppose an animal could make anything of the phrase 'truth is beauty', because this sort of thing, the extraction of meaning from sounds-as-words, is a mind's prerogative.
3. Likewise with music: some animals respond to the incidence of harmonious sound frequencies, but again it requires a mind to discriminate an intended communication, in short, to discern structure in these strands.
4. Cognition (i.e. the transformation of signals into semantic packets), however, takes place elsewhere than the auditory cortex: in the left hemisphere for words, in the right hemisphere for music; the reason being (it is supposed) that words require analysis, music synthesis; and this respectively happens to be the division of competence between the hemispheres.
5. Now here comes the really difficult part. Some time in the far distant history of hominids, the genetic structure for all this was laid down. Language was simple then, probably just a few dozen mostly monosyllabic words, while music might at first have been nothing more than the sing-song type of aural gesturing which we still do now (when we ask a question, we raise our pitch; when we protest, we descend a fifth; glee goes up a third etc etc). Occasionally drumming may have been added. Simple beginnings, but from the start with appropriate 'cognitive linkages' which, as human communications, would have been powerfully imprinted so that for all future time to come, the human brain would be enabled to discriminate between molecular vibrations modulated by tongue and lips (and later larynx) and those vibrations emanating still from vocal cords, but without or only little modulations. (Let me note in passing how little has changed. An orchestra today still comprises in the main instruments deputising for vocal cords and open cavities, namely strings, reeds and brass, while the percussion also retains its authentic function. Probably therein you'll find one reason why electronic music strikes us as 'unnatural').
6. From the foregoing you should have no difficulty in keeping the sensory and cognitive facets of music each in their own place. Some sounds are inherently 'beautiful' because they caress the nerves in the same way as a gentle stroke on the arm or a soft kiss; and in recent centuries the discovery of chromatic harmony and the refinements in instrumental production have added a new dimension to this indubitable pleasure. There is no real mystery here, as my comparison with a caress indicated; the mechanical detail is relatively well-known and not very interesting philosophically (unless you happen to be intrigued by physiology, as I confess I am).
7. The more important aspect of music is therefore (as you suggested) its tremendous influence on mood, and through that agency, on our mental and even spiritual well-being (or ill-being!). Now a lot of music, classical as well as popular, exerts mostly a visceral impact on our nerves, so this function is rarely more sophisticated than other sensory and sensual transactions, and there is a problem here. Because the mind is affected by its structural perception of these sounds as music, it reacts; and if the music is cheap, aggressive, violent, vicious (as a lot of it happens to be), stress results. So in our world of incessantly piped and manipulated music, a great deal of social harm is done by the indiscriminate pouring out of this stuff over the public media. Strangely enough, this goes hand in hand with the peculiar fact that to many people, music is a surrogate religion, a surrogate narcotic and so forth: indications of heavy dependence and craving, which suggests a universal perception of some deep secret woven somehow into the fabric of music that demands endless repetition as a means of getting closer to it. Now you mentioned 'evolutionary pattern': although it is not the path to a complete answer, it will serve to illuminate significant aspects; and so I will latch onto this and give you one reading of an evolutionary trail that has a pretty high degree of plausibility.
8. Have you ever been caught alone in an abandoned building or a forest on a pitch black night? Have you noticed how suddenly your sense of hearing becomes super-acute, how it enables you to navigate by locating objects and obstacles by the slightest sound, from the echo of your breathing to the cracking of a dried leaf, things you would never notice in the ordinary course of living? Well, among the hominids I mentioned earlier, this would have been a common, indispensable faculty. And of course, you would bring all your fears, your fright and apprehension, your determination and courage, to bear on the situation, and you would soon learn to distinguish the swoop of an owl's wings from the sniff of a wolf. You might like to elaborate such a scene, or many of them, in your imagination in order to appreciate how rapidly and kaleidoscopically your mood would change in the course of just a few minutes as you fight your way to freedom and safety. Now many, indeed innumerably many, of these subtle distinctions among sounds would have become (through cognitive linking and then genetic transmission) embedded in our species profile as a permanent resource of aural analysis, enabling us to recognise instantaneously the structural features of these molecular vibrations, as well as their significant mood associations: and now the crucial element in this theory is that these aural images, being a permanent repertoire, can be stimulated 'by proxy', by evocation and imitation, as similarly you can be inspired to feelings of terror, pity, love, excitement by just watching a movie. The avenue to this type of evocation of aural percepts is, of course, music.
9. So the 'deep secret' I spoke of is the hidden store of millennia of evolutionary travails and experiences of ancient hominids in their ascent to full humanness. Over the course of hominid evolution, these experiences would have amassed a considerable staple of functional sonic stimuli (I call them 'experience percepts'), and because each of them reflects something utterly basic and fundamental to what it means to be a human being, the mood associations they evoke and stimulate when we play or listen to music are often of the type that strikes a very deep chord in us. But you can also see from this, I think, that ignorant manipulation is apt to have disastrous consequences. We have become very sophisticated since then; and societal living today has alienated us so much from the world of nature that we hardly recognise the difference any more between what is 'natural' and what is artificial. We have lost touch with the impact musical sounds have on our psyche, and are therefore unable to distinguish good from bad, good from evil, unless we spend years on it in a private endeavour to get back to these roots. This is a very recent phenomenon. For instance, if you read the poems of Tyrtaeus, you may be startled to find that he castigates the Spartan youths for tuning their instruments in (say) the Lydian instead of the Aeolian mode, recognising that one of these is extremely detrimental to their martial spirit. This is absolutely indiscernible to us today; it is a sensitivity long gone. But that power is still there, because it is a power of the mind. We today just don't make enough of an effort any more to keep that flame truly alive.
If you wish to pursue some of these thoughts on your own, I can recommend a good book to start on: Music, Brain and Ecstasy by Robert Jourdain. The author is a musician with scientific training. Not much philosophy is to be found in his pages; but another sorry chapter in our general delinquency in respect of music is that very few philosophers have written knowledgeably enough on music to qualify as real philosophy. A notable exception is Susanne Langer, whose books Philosophy in a New Key and Feeling and Form contain important chapters on music. Finally there is a book by Merlin Donald, Origins of the Modern Mind, which is not concerned with music at all, but enables you to study some of the evolutionary factors relevant to the mind in considerable depth. But to read this with profit, especially if music is your priority, you need to do a lot of independent thinking while the author talks to you, so this is perhaps a book to keep on the reserve list for when you have reached a relatively advanced stage in your studies.
I was wondering what religion and philosophy have in common, and also what makes them different from each other. You see, this is my 1st year studying religion and theology, and I'm very confused!
I was also wondering, between religion and philosophy, what is your opinion about which one is more necessary in the new century?
Religion is often wrongly associated with extrinsic factors like institutional setup or forms like worship and Holy texts, but religion is basically about ideas. Out of its ideas flow its institutions, its behaviour and its history. Religion and philosophy are both about ideas. But they are about ideas in different ways. Broadly, religion is about ideas qua God; philosophy is about ideas qua thinking. Of course philosophy may take up thinking in more limited ways which do not recognise the universality and authority of reason, but subject reason to ideology and the like; but the same happens in religion: one may become 'pharisaic' about it. Yet religion still has to think about God, and thinking in philosophy quickly comes to recognise the universalising power of reason. So while both philosophy and religion basically have to do with ideas, they have to do with ideas in different ways, only these different ways soon lead back toward the other again. Philosophy and religion can't get away from each other. Modern philosophy (since the 18th century in particular) is avowedly secular and therefore it tries to think in a way which will steer clear of religion (of ultimate notions such as love and truth). However, modern reasoning in ethics (of what is ordered to the good) steers even modern philosophy back toward questions of morality (of what is right) and thereby into the central province of religion.
Philosophy without religion is trivial and vain. Religion without philosophy is ignorant and often malignant. In the new century religion needs to rediscover its sister Philosophy, and Philosophy needs to soften her heart to the ideas precious to religion and join forces with it.
Matthew Del Nevo
I believe that you are contributing to your own confusion by trying to put a barrier between religion and philosophy. Religion is a philosophy and there is such a subject as 'Philosophy of Religion.' There is also a related topic called 'Moral Philosophy.' Philosophy asks questions like; Can we prove there is a God? Can it be shown that fundamental religious beliefs are true? Can it be shown that fundamental religious beliefs are possible? Are fundamental religious beliefs justifiable? Was the universe designed? Is it reasonable to hold fundamental religious beliefs? Are there beliefs which do not require justification? Is it a mistake to ask for justification of fundamental religious beliefs? Is religious belief possible?
As you will be aware, Theology is the study of God, religion and revelation. The difference between philosophy of religion, as briefly indicated above, and the topics of theology is that the latter are part of a philosophy which accepts by faith the existence of a god, and backs this up by a doctrine of beliefs which calls upon witnesses, prophets, representatives of God on earth, etc. Religion unwittingly involves another facet of philosophy called 'Dualism', which recognises a material body linked to a separate mind or soul; in most religions the soul is believed to survive the death of the material body. This often requires another belief, which could be argued has a metaphysical basis, and that is the notion of a location for the soul after the death of the physical body. In the Christian religion this is called Heaven.
We could say that religious believers recognise the philosophical questions the answers to which, in a way, can either threaten or support their faith and beliefs. However, a conviction of the truth and authenticity of their position is sufficient to ward off any threat, and is sufficient to provide its own supportive arguments. Seen as a philosophy in its own right, a major universal religion like Christianity is a powerful and intricate conceptual structure, based on an alleged source of divine revelation, the Bible. There is no argument within the Christian religion regarding the authenticity of the texts; differences only arise with regard to their interpretation.
Your second question about the necessity of philosophy and of religion in the new century depends on what you mean by necessity. To ask whether one is more necessary than the other is, to my mind, a bit like asking whether jam or marmalade is more necessary at breakfast time. It is a simple matter of choice. In my personal opinion both have always been needed. I am not sure why you should single out the new century to favour one or the other. Unless, looking at it pragmatically, you subscribe to the general notion that religion is, and has been for some considerable time, on the decline. This seems true with regard to the Christian religion; the general view is that churches are emptying rapidly. However, this is offset somewhat by the increased interest in New Age religions, but that is a subject for a separate debate.
There is a general feeling that the world is becoming more secular, seen in a swing towards material interests and a corresponding swing away from spiritual consciousness. There is less dependence on the church for guidance: births, marriages and deaths are seen to involve the church less and less. Religion is no longer a foundation for the law of the land; it no longer constitutes a deterrent for law-breakers, nor does it provide a basis for accusation and punishment. The steady collapse of, at least, the Christian religion has to some extent undermined moral and ethical persuasion.
There is much to say for and against religion, but in view of the secular shift and what seems an unhealthy increase in material ambition, I for one would certainly welcome some sort of religious revival. This, of course, is where philosophy can be very valuable in keeping a focus on religious and moral concerns; ironically, moral debate does not require a religion on which to base its tenets.
I have been reading Bertrand Russell's Introduction to Mathematical Philosophy, and I am stuck on his discussion of Frege's definition of the concept of number.
As a visual example, Russell talks about putting things into bins according to the relation of similarity. For example, I note that there is a one-to-one correspondence between my socks and my feet. So, I should put my pair of socks and my pair of feet in the same bin. In this bin, we can also put my hands, my gloves, my friend Eddie's hands, my friend Jenny's eyes, each married couple, and in fact any collection that comes in a pair. This bin will be, as Russell says, a collection with an infinite number of members, and each of these members is a collection with 2 members. We label (define) this bin as the number 2.
So here is where I start getting confused... Russell defines a number as "the set of all classes that are similar to the given class". (Here class essentially means 'collection'.) For an example, the number 2 is defined as the class of couples. I think my confusion is over what Russell means here by "the given class". He phrases it another way: "The number of a class is the class of all those classes which are similar to it." What is meant by "it"? Which class is "it" referring to?
I am trying to sort this definition out in terms of the bins analogy. We assembled bins filled with collections that are similar to each other, and labeled them 'two' or 'three' or whatever the case may have been. But Russell then defines the number of the bin as the collection of the collections that are similar to the bin, not to each other. My confusion is that the bin has an infinite number of members, so its members are not similar to it, but to each other. (For example, the number of the bin of couples is the set of all couples, and there are an infinite number of couples. They are similar to each other, not to the whole bin.) It seems to me that this definition of number leads to every number being infinite.
I think that the key to my understanding of this is the point at which we define the number 2 to be the bin containing all couples. It seems that the class of all couples is not the same as the number 2 (for example, that bin has infinitely many members). Russell says essentially that of course defining 2 as the class of couples feels strange at first, but this strange feeling goes away. The bin containing all couples is a certainty, whereas the number 2 is a "metaphysical entity about which we can never feel sure that it exists". Therefore it becomes natural to deal instead with the class of couples. I think my problem might be that the strange feeling has not gone away yet, and I could use some further discussion to help see why it should.
Well, notice that on p. 18 of Russell, B. Introduction to Mathematical Philosophy (London: George Allen & Unwin 1930) he states, "the number of a class is the class of all those classes that are similar to it". So first, you must be very very careful of your terminology here. It's not the set. Second, class does not mean "collection", and that is your basic problem. Russell very specifically states that this is incorrect; see p. 12, for example, where he says that he will speak of a "class" instead of a "collection".
The bin containing all couples has this similarity between couples: they all have two members. That is their one similar characteristic: that of having two elements. That one characteristic holds over an infinite number of specific instances, and that is the point of Russell's conception of the class.
So one might say, employing Russell's intensional definition (p. 12), that the "defining property" of the infinite-sized class (p. 13: "a class and a defining characteristic of it are practically interchangeable") of things with two members: couples, is twoness, which is the class-idea, the "number": two. The "bin" is precisely that defining property, no more and no less. Therefore, the class: couples: twoness: two; is precisely identical with that bin, and with the number two.
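To put the same point in symbols (a gloss in modern set-builder notation, not Russell's own symbolism): write α ~ β for "α is similar to β", i.e. there is a one-one correspondence between the members of α and the members of β, and write N(α) for "the number of the class α". Then the definition on p. 18 reads

\[
N(\alpha) = \{\beta : \beta \sim \alpha\}, \qquad\text{e.g.}\qquad
2 = N(\{\text{left foot},\ \text{right foot}\}) = \{\beta : \beta \text{ is a couple}\}.
\]

So the "it" in "similar to it" is the given class you started from (your pair of feet, say), and its number is the whole bin of classes similar to that given class. The bin is never required to be similar to its own members, so no regress of infinite numbers gets going.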
I mean, you're getting lost in the details. It's just a way of saying, "What do all sets of two things have in common? Hey, there are two of them! So we'll define the number two by just saying: what they have in common is that number." That's really it. Really. The only confusing thing is that Russell is taking that as a definition of number, which sort of turns things around from the normal way of thinking of it, which is: the number two describes that there are two things. He's just saying, no, it doesn't do that, what's really happening is that we get the number from our intuition, if you want to think of it that way, that what all those things have in common is that there are two of them. So realizing that we have that intuition of number after seeing all the couples is the "strange feeling", because we usually think that the number is first, as a description. You see?
Steven Ravett Brown
Apart from being able to feed yourself and build shelter what's the advantage of knowing anything?
You seem to be assuming that the only things that we desire are very basic needs such as food and shelter. However, most humans have far more complex needs and desires than this. Love, companionship, wealth, power, security, etc. are all powerful motivators. Knowledge itself is seen as an intrinsically good thing by many, including myself.
True beliefs about the way the world is tend to be of greater value to us than false beliefs in interacting with the world, and therefore help us to satisfy those other desires we might have.
"Food" and "shelter" are nice vague words, aren't they? Now, just what do they mean? What kind of shelter do you want? You want clothes? What are "clothes"? Animal skins? Ok, how do you kill an animal and skin it? What do you do to the skin to make it into "clothes"? Let's see... you kill an animal with... a spear, right? Ok, how do you make a spear? You "cut" a tree... with what? You put a point on the tree branch... with what? Your teeth? No... a knife? How do you make a "knife"? Well, maybe you just hit the animal with a club, how about that? Ok... what animal? Where do you find it? Well, let's say you've found it and hit it... and you just rip the skin off... then what? You just drape the skin, all bloody and dripping, over yourself? And how long do you think it would take to rot? Whoops, I guess you have to treat it somehow... now, how do you do that?
Well we haven't even gotten past clothes yet, pretty crude ones, and we're sort of stuck from our lack of "knowledge", aren't we... I guess we have to learn how to make knives, to tan skins, that sort of thing, right? Now, once we've learned to make a knife to kill an animal... guess what, we can use it for other things! Like killing people... like cutting wood, if we make it big enough... and gosh, once we cut some wood, we can make a boat, a house... but to make a boat, we have to do that thing: "learn", you know... like, how to make a rudder, a mast, maybe even sails... and after all it would be nice to know how to navigate just a little, wouldn't it? Maybe make a net to catch some fish? But making a net means learning again... about knots, about making rope... it just never ends, does it. Once you learn how to make rope, then you can tie all sorts of things, can't you... I mean, a little fish, what's the harm in that? It makes some variety with all the meat we've been hitting with our clubs, right? Or are we using knives yet? Oh, by the way, how do we teach all this stuff to our kids... oh oh... we have to invent "writing"... oh dear, now it really starts, doesn't it.
I guess we also want "food" like vegetables and stuff, right? But that's just more to learn... plowing requires a plow... now what's that? How do you make one? How do you use one? You want to dig a hole... but that needs something like a shovel, and we don't even know how to mine metals yet, much less smelt them... so I guess we need wooden shovels... now how do you make one of those without a metal knife? Well you could chip stone into one, I guess... or use a stone knife to make one... how do you make a stone knife, anyway?
So "feeding yourself" and "building shelter" require enormous amounts of accumulated knowledge, if you want anything resembling what you're used to. You want to go into the wilderness and live off the land? Hey, sure, just don't forget where your axe came from... an iron mine, a smelter, a mold, etc... all requiring extremely sophisticated technology, supported by all sorts of infrastructure, technological and economic. Those nice warm clothes, woven on a loom from harvested cotton... the loom built from wood and metal, an accumulation of thousands of years of technology, the cotton grown with plows... even the sack you stuff the cotton balls into is woven, isn't it. Your leather boots... tell me, how do you make boots? Bootlaces? Boot soles? What if your boots are synthetic? Oboy. And all that knowledge can be used for... food, shelter, clothes, transportation. And maybe even a bit of fun now and then... is that so bad?
But maybe what you want is to live like the Native American Indian... noble and free, right? Well, noble, anyway... their lives were constrained by unbreakable customs... not what we'd call free. Well, you could break them... and die. Or get an infection... and die; sick... and die; injured... and die. Or maybe, if you're very very lucky, do ok until you kill off all the buffalo, the way the Native Americans' ancestors killed off the mammoths. Yes, they did. And starved, many of them. Well, there's always the nearest war, instead of TV... good entertainment, slaughtering your neighbors... very highly thought of in those days. Unless you were the ones getting slaughtered, anyway.
Now where were we... oh yes... the advantage of knowing things beyond "food" and "shelter"... you mean, like medicine?
Steven Ravett Brown
Is it just me or is everything ever written in philosophy completely obvious? If you have the ability to reason you come up with the same answers as everyone else in history. Every time I read something new, the only thing I seem to learn is that someone else thought that way before me. Descartes may be right on the money with his wax but who really cares, tell me something I don't know. I am looking for someone terse, who can either enlighten me or, if that's not possible, at least confirm my own findings.
It is one thing to read Thus Spoke Zarathustra which is at least quotable, but something like Beyond Good and Evil cannot be segregated this way, which means it has to be analyzed (boring!) and by the time you're done all you can really say is Nietzsche is an idiot who talks too much about what isn't as opposed to what is. At least Machiavelli takes a stand, although his stand may be wrong. It seems like Socrates, Confucius, and Sun-Tzu are the only original people and nobody has yet added or taken away from their findings. Perhaps I should stick to reading Copleston and study philosophy as history, instead of a means of mental expansion. Anything you can give me to renew my love of thought would be greatly appreciated. But my question still remains: Is there or has there ever been anything left to discover in this field or has it all been subconsciously innate to the logical mind?
"Is it just me or is everything ever written is philosophy completely obvious? If you have the ability reason you come up with the same answers as everyone else in history."
That is clearly false; go to any library and you will find different viewpoints and opinions.
"Every time I read something new, the only thing I seem to learn is that someone else thought that way before me."
If it's "new", then it can't be the same, can it. So you've already contradicted yourself.
"Descartes may be right on the money with his wax but who really cares, tell me something I don't know. I am looking for someone terse, who can either enlighten me or if not possible least confirm my own findings."
Philosophy is not TV soundbites. I'll tell you what: find a "terse" writeup of Russell & Whitehead's "Principia". If what you want is to be spoonfed ideas then MTV is a great place to look. Not philosophy.
"It is one thing to read Thus Spoke Zarathustra which is at least quotable, but something like Beyond Good and Evil cannot be segregated this way, which means it has to be analyzed (boring!)..."
Oh dear, analyzing is "boring". Well what can I say... we philosophers are a boring lot, aren't we. Sitting around all day, "analyzing", "thinking"... you know, those boring things.
"...and by the time your done all you can really say is Nietzsche is an idiot who talks to much about what isn't as opposed to what is."
Yes, he was such an idiot... I guess all the well-read, boring people who write so much about him are also idiots, and the people who read them are too... well, I guess everyone, with perhaps one exception, is an idiot.
"At least Machiavelli takes a stand, although his stand may be wrong."
Oh dear... but to find out if it is wrong, you'll have to... analyze, won't you.
"It seems like Socrates, Confucius, and Sun-Tzu are the only people original and nobody has yet added or taken away from their findings."
Yes, you're absolutely correct. All the thousands of books and articles written since them are utter garbage, worthless trash, total idiocy. Please, pay them no attention.
"Perhaps I should stick to reading Copleston and study philosophy as history, instead of a means of mental expansion. Anything you can give me to renew my love of thought would be greatly appreciated.|
Um... I hate to be the one breaking this to you, but "thought" involves "analysis". Yes, I know... boring, boring...
"But my question still remains: Is there or has there ever been anything left to discover in this field or has it all been subconsciously innate to the logical mind?"
Oh nothing at all. It's all in the subconscious, just like Socrates said.
Steven Ravett Brown
Man! All in all I disagree with nearly everything you say, but then again I am a philosopher, and if philosophers are good at anything it's defending philosophy against the slander of others.
"It seems like Socrates, Confucius, and Sun-Tzu are the only people original and nobody has yet added or taken away from their findings."
Well, have you been reading recently? Here's a list of philosophers that you fail to mention, some old, most new, and the crazy things they say...
Heraclitus: You never step in the same river once (yes, once)!!!
Thales: everything is water (???)
Graham Priest: there are true contradictions!
David Lewis: there are infinitely many concrete worlds!
Hilary Putnam: meaning ain't in the head!
Tyler Burge: belief ain't in the head!
Hartry Field: numbers don't exist!
Wittgenstein: the world is the totality of facts, not things!
Later Wittgenstein: 2 + 2 doesn't necessarily = 4!
Peter Singer: killing babies is ok!
Not to mention Quine, Carnap, Russell, Ayer, Kripke, McDowell, McTaggart, Kant, Hume, Blackburn and most philosophers ever. In fact nearly every philosopher ever has disagreed with nearly everyone else on nearly every topic; that's why it's so much fun!
So get a book, read it and enjoy the crazy world of philosophers.
Lastly, "Is there or has there ever been anything left to discover in this field or has it all been subconsciously innate to the logical mind?". Well, I certainly don't think that a) I have a subconscious and b) that that subconscious has every entertained the idea that everything is water...
A question about time.
I'm going to try and tackle your question in a roundabout way, because it's a pretty deep issue and I can't pretend to answer it definitively, only to throw out some ideas that may aid your understanding.
Let me give you three situations to compare:
a. The earth revolves around the sun once a year.
b. A needle is standing upright on its point.
c. On a certain day in 1606, Ben Jonson visited his friend Bill Shakespeare at the latter's lodgings for a drink and a chat. While they talked, Shakespeare would from time to time scribble a dozen or so lines of verse on a sheet of paper. Jonson later wrote that it was the culminating scene of Macbeth.
Before I turn to an explanation of what these items purport, let me first attend to the notion of "the womb of time", which is for all intents and purposes the core of your multilevel question, reduced to a neat metaphor. Now think about this for a moment: a woman is pregnant; she bears a growing embryo in her womb; and in the normal course of events this embryo would eventually see the light of day and claim full existence in 'real time'. Terribly suggestive imagery! It insinuates into our minds that a thing, to become an existent, temporally bounded entity, begins as an incomplete, rudimentary, seedlike fragment of thingness; that it starts at a definite moment, call it seeding or what you will, which puts a pattern of development in train with an issue to some extent pre-known and predictable.
So time, in this metaphor, is equated with a womb; but even though it is only a metaphor, the image does carry a significant freight of fallacy. It suggests that the universe 'seeds' time with its future contents once and for all, so that all objects and events are, in a sense, merely the specific occasions of their own realisation and that they are determined ahead of actually occurring. In the Bible this notion is expressed by another notorious metaphor: "It is written". Here the insinuation is that a "Book of Eternity" exists and the passage of time represents the pages being turned.
Both these metaphors have virtually universal status; they are accepted, believed and repeated ad infinitum as veritable truths, in other words as unexamined presuppositions of our thinking about time; and as such they infiltrate science, religion and philosophy as well. Yet I call the notion a fallacy, and I do this on the strength of a scrutiny of its broader meaning in various contexts, where I find that no account is taken of the elementary opposition between monodirectional, periodical and hierarchical principles in the organisation and propagation of events in the universe.
We might think of the "Book" as a program: it may assist with approaching the issues by comparison with a well-known technology. The big bang would then figure as the moment when the program is loaded and decompression inaugurated. Of course we must assume the program to be a self-starter, so that it begins its work without external triggers; but we must also assume that a kind of "residual electric potential" (gravity) comes in the same package with the expanding spatiotemporal shell, so that the elements released in the decompression will begin at once to interact with it and among each other.
At this stage it is worthwhile reminding ourselves that some very new ideas are actually quite old. All the way back in 1814, the physicist Pierre-Simon Laplace wrote of an ultimate intelligence capable of enumerating all the atoms in the universe and how, possessing a valid theory of gravitational attraction, this intelligence would therefore be in a position to calculate the future trajectory of each atom until its ultimate, terminal decay. Here is your idea again, couched in scientific terms. For such an intelligence, however (this is me speaking now), a concept of time would be meaningless, for the paths of this googolplex of atoms would be just a single immense but immobile and immutable graph. And this, at length, brings us back to my initial points.
Would this Ultimate Intelligence (UI) have any trouble with seeing Condition (a) through from beginning to end? None whatever, Laplace would say, and I have to agree. And to this day, physicists are inclined to keep agreeing; I spoke to one of their number only a few weeks ago, and he repeated this hypothesis to me and was very surprised to be told just how long ago it was first mooted!
But we are just coming to the crucial juncture: to a feature of this universe and the behaviour of its objects of which Laplace knew nothing. Laplace's UI could not cope with Point (b). Now this might raise eyebrows, but listen carefully. A needle standing on its point will obviously fall in line with one of its 360 degrees of angles, but which? This again is an enigma with a long pedigree, for which a solution was worked out just before 1900 by Henri Poincaré (hence its diagrammatic representation is called a 'Poincaré section'). The solution was that the problem is insoluble! Given a 'fair' needle, i.e. one without any bias, its support on a mere point creates an unstable equilibrium in which the 'wobble' of a single atom may influence the direction of its fall. But which atom? Well, even on a needle point there may be 100 million to choose from, but then you also need to find a reason why that particular atom wobbled. I think you'll now get the gist of the problem!
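To make the point concrete, here is a minimal numerical sketch, assuming an idealised model of the needle as an inverted pendulum and using purely illustrative constants. Two starting tilts that differ by far less than anything measurable topple to opposite sides, which is the practical sense in which the direction of the fall is incalculable:

import math

def side_of_fall(theta0: float, g: float = 9.81, length: float = 0.03,
                 dt: float = 1e-4) -> str:
    """Integrate theta'' = (g/length) * sin(theta) until the needle has toppled."""
    theta, omega = theta0, 0.0
    while abs(theta) < math.pi / 2:          # stop once it lies 90 degrees over
        omega += (g / length) * math.sin(theta) * dt
        theta += omega * dt
    return "left" if theta < 0 else "right"

tiny = math.radians(1e-12)                   # about a trillionth of a degree
print(side_of_fall(+tiny))                   # prints "right"
print(side_of_fall(-tiny * 1.0001))          # prints "left": an imperceptibly different start

The simulation itself is deterministic, of course; the moral is only that the direction of the fall hangs on initial differences so minute that no physically possible measurement could fix them in advance.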
Let me apply a neat contemporary slogan to the situation: "It does not compute." But of course this translates exactly, word for word, into "It is not written."
Just for the heck of it, imagine that when it finally topples, the needle's bang on the table frightens the life out of a microbe, which goes running for its dear life... and suddenly you begin to realise that a lot of trajectories on the UI's graph are going to trail off in indeterminable wobbles of their own...
Actually this principle is so important, and yet so little appreciated, that it is apt to quote another example. Let's fire a bullet at a shop window, point blank and absolutely straight. We'll assume, furthermore, that the convex end has been machined to absolute perfection and that the glass pane has a perfectly regular lattice structure. Now given these conditions, the bullet would be repelled! Why? Because (as an old philosophical principle states), in a perfect arrangement of elements, there must be a sufficient reason for any single atom to yield first; but lacking such a reason, none does (this has been experimentally verified). The lesson here is that any action whatever relies on imperfections to facilitate the occurrence of actual events; but what are imperfections other than more incalculable contingencies, more unwritten leaves in the Book?
And so, finally, to Point (c). We see here an agency at work, creating something new in rather unexpected circumstances. What this agency (Shakespeare) produced was not, however, a new arrangement of old atoms, but a web of ideas spun out of material which cannot be said to have any real existence at all; certainly no trace of it would be detectable to Laplace's UI. For the written text, which might at first seem to contradict me, is not after all the idea, but only its incidental token, which could easily have been replaced by Ben memorising the text. What a paradox! Humans think and put down their thinking on paper, but the moment another human picks it up, it's not the paper, but the thinking, that they reconstruct. Now where on the UI's chart, do you suppose, might thought atoms be represented?
All right: time for conclusions.
Point (a) covers what's known as 'determinism'. It applies, as we saw, to those features of the universe that are enumerable, calculable and mechanically predictable. The gross trends of such structures are relatively easy to foresee, because they are governed in the main by periodicity.
Point (b) brings the fine detail forward, which evidently has a latent influence on the trend of gross structures, and their intrinsic instability removes them from exact predictability. They are, however, predictable as mass points over some lengths of time, because in the main they are governed by hierarchical organisation.
Events of type (c) are strictly monodirectional and unrepeatable. It is also a characteristic of such events that they need not have any event-like or object-like consequences. Point (c), therefore, is the only one of the three that has any genuine bearing on the problem. For one can state as a general principle that the occurrence of just one event of Type (c) completely disqualifies the generality of deterministic principles and reduces their validity to the status of 'special instances'. Moreover, that same single occurrence puts paid to the notion of a "womb of time", for if the flow of time is thereby shown to be monodirectional, then obviously there is no further point in pursuing the image of a future in which Macbeth is already waiting for us. I suppose one easy way of comprehending this is that we can calculate even today millions of different solar and stellar positions and work out (barring accidental intrusions of dark matter) what kind of window we will have on the universe in 50 million years. But not even the UI himself, lacking knowledge of thought atoms, could have predicted Macbeth until the day that it was actually written.