Combining Texts

All the ideas for '"Yes" and "No"', 'What is innate and why' and 'Entity and Identity'



8 ideas

5. Theory of Logic / A. Overview of Logic / 1. Overview of Logic
If a sound conclusion comes from two errors that cancel out, the path of the argument must matter [Rumfitt]
     Full Idea: If a designated conclusion follows from the premisses, but the argument involves two howlers which cancel each other out, then the moral is that the path an argument takes from premisses to conclusion does matter to its logical evaluation.
     From: Ian Rumfitt ("Yes" and "No" [2000], II)
     A reaction: The drift of this is that our view of logic should be a little closer to the reasoning of ordinary language, and we should rely a little less on purely formal accounts.
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
Standardly 'and' and 'but' are held to have the same sense by having the same truth table [Rumfitt]
     Full Idea: If 'and' and 'but' really are alike in sense, in what might that likeness consist? Some philosophers of classical logic will reply that they share a sense by virtue of sharing a truth table.
     From: Ian Rumfitt ("Yes" and "No" [2000])
     A reaction: This is the standard view which Rumfitt sets out to challenge.
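     A note: for reference, the shared truth table invoked by this standard view is simply that of conjunction. A minimal sketch in LaTeX (an illustration, not taken from Rumfitt's text):

     \begin{tabular}{cc|c}
       $P$ & $Q$ & $P \wedge Q$ \\ \hline
       T & T & T \\
       T & F & F \\
       F & T & F \\
       F & F & F
     \end{tabular}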
The sense of a connective comes from primitively obvious rules of inference [Rumfitt]
     Full Idea: A connective will possess the sense that it has by virtue of its competent users' finding certain rules of inference involving it to be primitively obvious.
     From: Ian Rumfitt ("Yes" and "No" [2000], III)
     A reaction: Rumfitt cites Peacocke as endorsing this view, which characterises the logical connectives by their rules of usage rather than by their pure semantic value.
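     A note: for 'and', the 'primitively obvious rules' would be something like the familiar natural-deduction introduction and elimination rules, sketched here in LaTeX as an illustration rather than as Rumfitt's own formulation:

     \[ \frac{A \quad B}{A \wedge B}\ (\wedge\text{I}) \qquad \frac{A \wedge B}{A}\ (\wedge\text{E}_1) \qquad \frac{A \wedge B}{B}\ (\wedge\text{E}_2) \]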
9. Objects / A. Existence of Objects / 2. Abstract Objects / b. Need for abstracta
We need a logical use of 'object' as predicate-worthy, and an 'ontological' use [Strawson,P]
     Full Idea: There is a good case for a conservative reform of the word 'object'. Objects in the 'logical' sense would be all predicate-worthy identifiabilia whatever. Objects in the 'ontological' sense would form one ontological category among many others.
     From: Peter F. Strawson (Entity and Identity [1978], I n4)
     A reaction: This ambiguity has caused me no end of confusion (and irritation!). I wish philosophers wouldn't hijack perfectly good English words and give them weird meanings. Nice to have a distinguished fellow like Strawson make this suggestion.
9. Objects / D. Essence of Objects / 3. Individual Essences
It makes no sense to ask of some individual thing what it is that makes it that individual [Strawson,P]
     Full Idea: For no object is there a unique character or relation by which it must be identified if it is to be identified at all. This is why it makes no sense to ask, impersonally and in general, of some individual object what makes it the individual object it is.
     From: Peter F. Strawson (Entity and Identity [1978], I)
     A reaction: He links this remark with the claim that there is no individual essence, but he seems to view an individual essence as indispensable to recognition or individuation of the object, which I don't see. Recognise it first, work out its essence later.
18. Thought / B. Mechanics of Thought / 4. Language of Thought
If everything uses mentalese, ALL concepts must be innate! [Putnam]
     Full Idea: Fodor concludes that every predicate that a brain could learn to use must have a translation into the computer language of that brain. So no "new" concepts can be acquired: all concepts are innate!
     From: Hilary Putnam (What is innate and why [1980], p.407)
     A reaction: Some misunderstanding, surely? No one could be so daft as to think that everyone has an innate idea of an iPod. More basic innate building blocks for thought are quite plausible.
No machine language can express generalisations [Putnam]
     Full Idea: Computers have a built-in language, but not a language that contains quantifiers (that is, the words "all" and "some"). …So generalizations (containing "all") cannot ever be stated in machine language.
     From: Hilary Putnam (What is innate and why [1980], p.408)
     A reaction: Computers are too sophisticated to need quantification (which is crude). Computers can work with very precise and complex specifications of the domain of a given variable.
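     A note: a generalisation of the kind Putnam has in mind is one requiring the universal quantifier, as in 'All ravens are black', standardly symbolised (an illustration, not Putnam's own example) as:

     \[ \forall x\,(Rx \rightarrow Bx) \]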
19. Language / F. Communication / 3. Denial
We learn 'not' along with affirmation, by learning to either affirm or deny a sentence [Rumfitt]
     Full Idea: The standard view is that affirming not-A is more complex than affirming the atomic sentence A itself, with the latter determining its sense. But we could learn 'not' directly, by learning at once how to either affirm A or reject A.
     From: Ian Rumfitt ("Yes" and "No" [2000], IV)
     A reaction: [compressed] This seems fairly anti-Fregean in spirit, because it looks at the psychology of how we learn 'not' as a way of clarifying what we mean by it, rather than just looking at its logical behaviour (and thus giving it a secondary role).