Combining Texts

All the ideas for 'talk', 'What Numbers Are' and 'Minds, Brains and Science'

6 ideas

5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
Löwenheim-Skolem says any theory with a true interpretation has a model in the natural numbers [White,NP]
     Full Idea: The Löwenheim-Skolem theorem tells us that any theory with a true interpretation has a model in the natural numbers.
     From: Nicholas P. White (What Numbers Are [1974], V)
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / c. Counting procedure
Finite cardinalities don't need numbers as objects; numerical quantifiers will do [White,NP]
     Full Idea: Statements involving finite cardinalities can be made without treating numbers as objects at all, simply by using quantification and identity to define numerically definite quantifiers in the manner of Frege.
     From: Nicholas P. White (What Numbers Are [1974], IV)
     A reaction: [He adds Quine 1960:268 as a reference]
14. Science / C. Induction / 3. Limits of Induction
Maybe induction is only reliable IF reality is stable [Mitchell,A]
     Full Idea: Maybe we should say that IF regularities are stable, only then is induction a reliable procedure.
     From: Alistair Mitchell (talk [2006]), quoted by PG - Db (ideas)
     A reaction: This seems to me a very good proposal. In a wildly unpredictable reality, it is hard to see how anyone could learn from experience, or do any reasoning about the future. Natural stability is the axiom on which induction is built.
17. Mind and Body / C. Functionalism / 7. Chinese Room
Maybe understanding doesn't need consciousness, despite what Searle seems to think [Searle, by Chalmers]
     Full Idea: Searle originally directed the Chinese Room against machine intentionality rather than consciousness, arguing that it is "understanding" that the room lacks, ...but on Searle's view intentionality requires consciousness.
     From: report of John Searle (Minds, Brains and Science [1984]) by David J. Chalmers - The Conscious Mind 4.9.4
     A reaction: I doubt whether 'understanding' is a sufficiently clear and distinct concept to support Searle's claim. Understanding comes in degrees, and we often think and act with minimal understanding.
A program won't contain understanding if it is small enough to imagine [Dennett on Searle]
     Full Idea: There is nothing remotely like genuine understanding in any hunk of programming small enough to imagine readily.
     From: comment on John Searle (Minds, Brains and Science [1984]) by Daniel C. Dennett - Consciousness Explained 14.1
     A reaction: We mustn't hide behind 'complexity', but I think Dennett is right. It is important to think of speed as well as complexity. Searle gives the impression that he knows exactly what 'understanding' is, but I doubt if anyone else does.
If bigger and bigger brain parts can't understand, how can a whole brain? [Dennett on Searle]
     Full Idea: The argument that begins "this little bit of brain activity doesn't understand Chinese, and neither does this bigger bit..." is headed for the unwanted conclusion that even the activity of the whole brain won't account for understanding Chinese.
     From: comment on John Searle (Minds, Brains and Science [1984]) by Daniel C. Dennett - Consciousness Explained 14.1
     A reaction: In other words, Searle is guilty of a fallacy of composition (in negative form - parts don't have it, so whole can't have it). Dennett is right. The whole shebang of the full brain will obviously do wonderful (and commonplace) things brain bits can't.