Combining Texts

All the ideas for 'Sweet Dreams', 'Why Propositions Aren't Truth-Supporting Circumstance' and 'Minds, Brains and Science'



10 ideas

15. Nature of Minds / B. Features of Minds / 5. Qualia / c. Explaining qualia
Obviously there can't be a functional analysis of qualia if they are defined by intrinsic properties [Dennett]
     Full Idea: If you define qualia as intrinsic properties of experiences considered in isolation from all their causes and effects, logically independent of all dispositional properties, then they are logically guaranteed to elude all broad functional analysis.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.8)
     A reaction: This is a good point - it seems daft to reify qualia and imagine them dangling in mid-air with all their vibrant qualities - but that is a long way from saying there is nothing more to qualia than functional roles. Functions must be explained too.
16. Persons / E. Rejecting the Self / 4. Denial of the Self
The work done by the 'homunculus in the theatre' must be spread amongst non-conscious agencies [Dennett]
     Full Idea: All the work done by the imagined homunculus in the Cartesian Theater must be distributed among various lesser agencies in the brain, none of which is conscious.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.3)
     A reaction: Dennett's account crucially depends on consciousness being much more fragmentary than most philosophers claim it to be. It is actually full of joints, which can come apart. He may be right.
17. Mind and Body / C. Functionalism / 7. Chinese Room
Maybe understanding doesn't need consciousness, despite what Searle seems to think [Searle, by Chalmers]
     Full Idea: Searle originally directed the Chinese Room against machine intentionality rather than consciousness, arguing that it is "understanding" that the room lacks, ...but on Searle's view intentionality requires consciousness.
     From: report of John Searle (Minds, Brains and Science [1984]) by David J. Chalmers - The Conscious Mind 4.9.4
     A reaction: I doubt whether 'understanding' is a sufficiently clear and distinct concept to support Searle's claim. Understanding comes in degrees, and we often think and act with minimal understanding.
A program won't contain understanding if it is small enough to imagine [Dennett on Searle]
     Full Idea: There is nothing remotely like genuine understanding in any hunk of programming small enough to imagine readily.
     From: comment on John Searle (Minds, Brains and Science [1984]) by Daniel C. Dennett - Consciousness Explained 14.1
     A reaction: We mustn't hide behind 'complexity', but I think Dennett is right. It is important to think of speed as well as complexity. Searle gives the impression that he knows exactly what 'understanding' is, but I doubt if anyone else does.
If bigger and bigger brain parts can't understand, how can a whole brain? [Dennett on Searle]
     Full Idea: The argument that begins "this little bit of brain activity doesn't understand Chinese, and neither does this bigger bit..." is headed for the unwanted conclusion that even the activity of the whole brain won't account for understanding Chinese.
     From: comment on John Searle (Minds, Brains and Science [1984]) by Daniel C. Dennett - Consciousness Explained 14.1
     A reaction: In other words, Searle is guilty of a fallacy of composition (in negative form - parts don't have it, so whole can't have it). Dennett is right. The whole shebang of the full brain will obviously do wonderful (and commonplace) things brain bits can't.
17. Mind and Body / E. Mind as Physical / 2. Reduction of Mind
Intelligent agents are composed of nested homunculi, of decreasing intelligence, ending in machines [Dennett]
     Full Idea: As long as your homunculi are more stupid and ignorant than the intelligent agent they compose, the nesting of homunculi within homunculi can be finite, bottoming out, eventually, with agents so unimpressive they can be replaced by machines.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.6)
     A reaction: [Dennett first proposed this in 'Brainstorms' 1978]. This view was developed well by Lycan. I rate it as one of the most illuminating ideas in the modern philosophy of mind. All complex systems (like aeroplanes) have this structure.
17. Mind and Body / E. Mind as Physical / 3. Eliminativism
I don't deny consciousness; it just isn't what people think it is [Dennett]
     Full Idea: I don't maintain, of course, that human consciousness does not exist; I maintain that it is not what people often think it is.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.3)
     A reaction: I consider Dennett to be as near as you can get to an eliminativist, but he is not stupid. As far as I can see, the modern philosopher's bogey-man, the true total eliminativist, simply doesn't exist. Eliminativists usually deny propositional attitudes.
18. Thought / B. Mechanics of Thought / 6. Artificial Thought / a. Artificial Intelligence
What matters about neuro-science is the discovery of the functional role of the chemistry [Dennett]
     Full Idea: Neuro-science matters because - and only because - we have discovered that the many different neuromodulators and other chemical messengers that diffuse throughout the brain have functional roles that make important differences.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.1)
     A reaction: I agree with Dennett that this is the true ground for pessimism about spectacular breakthroughs in artificial intelligence, rather than abstract concerns about irreducible features of the mind like 'qualia' and 'rationality'.
19. Language / C. Assigning Meanings / 2. Semantics
Semantics as theory of meaning and semantics as truth-based logical consequence are very different [Soames]
     Full Idea: There are two senses of 'semantic' - as theory of meaning or as truth-based theory of logical consequence, and they are very different.
     From: Scott Soames (Why Propositions Aren't Truth-Supporting Circumstance [2008], p.78)
     A reaction: This subtle point is significant in considering the role of logic in philosophy. The logicians' semantics (based on logical consequence) is in danger of ousting the broader and more elusive notion of meaning in natural language.
19. Language / C. Assigning Meanings / 6. Truth-Conditions Semantics
Semantic content is a proposition made of sentence constituents (not some set of circumstances) [Soames]
     Full Idea: The semantic content of a sentence is not the set of circumstances supporting its truth. It is rather the semantic content of a structured proposition the constituents of which are the semantic contents of the constituents of the sentence.
     From: Scott Soames (Why Propositions Aren't Truth-Supporting Circumstance [2008], p.74)
     A reaction: I'm not sure I get this, but while I like the truth-conditions view, I am suspicious of any proposal that the semantic content of something is some actual physical ingredients of the world. Meanings aren't sticks and stones.