
Ideas of Daniel C. Dennett, by Text

[American, b.1942, Pupil of Gilbert Ryle at Oxford. Professor at Tufts University.]

1978 Brainstorms: Essays on Mind and Psychology
p.15? p.76 Theories of intentionality presuppose rationality, so can't explain it
     Full Idea: Intentional theory is vacuous as psychology because it presupposes and does not explain rationality or intelligence.
     From: Daniel C. Dennett (Brainstorms: Essays on Mind and Psychology [1978], p.15?)
     A reaction: Virtually every philosophical theory seems to founder because it presupposes something like the thing it is meant to explain. I agree that 'intentionality' is a slightly airy concept that would probably reduce to something better.
p.7? p.77 Beliefs and desires aren't real; they are prediction techniques
     Full Idea: Intentional systems don't really have beliefs and desires, but one can explain and predict their behaviour by ascribing beliefs and desires to them. This strategy is pragmatic, not right or wrong.
     From: Daniel C. Dennett (Brainstorms: Essays on Mind and Psychology [1978], p.7?)
     A reaction: If the ascription of beliefs and desires explains behaviour, then that is good grounds for thinking they might be real features of the brain, and even if that is not so, they are real enough as abstractions from brain events, like the 'economic climate'.
1981 True Believers
p.44 States have content if we can predict them well by assuming intentionality
     Full Idea: Dennett maintains that a system has states with representational content if we are able to predict its behaviour reliably and voluminously by adopting the intentional stance toward it.
     From: report of Daniel C. Dennett (True Believers [1981]) by Peter Schulte - Mental Content 5
     A reaction: Dennett himself seems happy to thereby attribute representational content to a chess-playing computer. This sounds like a test for content, rather than explaining what it is. Not promising, I think.
1984 Elbow Room: varieties of free will
§2.2 p.29 Awareness of thought is a step beyond awareness of the world
     Full Idea: The creature who is not only sensitive to patterns in its environment, but also sensitive to patterns in its own reactions to patterns in its environment, has taken a major step.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §2.2)
§3.2 p.54 Foreknowledge permits control
     Full Idea: Foreknowledge is what permits control.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §3.2)
§3.3 n14 p.58 Causal theories require the "right" sort of link (usually unspecified)
     Full Idea: In causal theories of knowledge and reference, the causal chain between object and thought must be of the "right" sort - the nature of rightness to be specified later, typically.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §3.3 n14)
     A reaction: This is now the standard objection to a purely causal account of reference. Which of the many causal chains causes the meaning? Knowledge of maths is a further problem for it.
§4.1 p.79 The active self is a fiction created because we are ignorant of our motivations
     Full Idea: Faced with our inability to 'see' where the centre or source of our free actions is,…we exploit the gaps in our self-knowledge by filling it with a mysterious entity, the unmoved mover, the active self.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §4.1)
     A reaction: I am convinced that there is no such thing as free will; its origins are to be found in religion, where it is a necessary feature of a very supreme God. I don't believe for a moment that we need to believe in free will.
§4.2 p.82 I am the sum total of what I directly control
     Full Idea: Control is the ultimate criterion of the self: I am the sum total of the parts I control directly.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §4.2)
     A reaction: This looks awfully like a flagrant self-contradiction, and I think it is. It seems pretty obvious that there is at least a distinction between the bit or bits that do the controlling, and the bits that get controlled.
§4.2 p.87 An overexamined life is as bad as an unexamined one
     Full Idea: The unexamined life may not be worth living, but the overexamined life is nothing to write home about either.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §4.2)
     A reaction: Presumably he means a life which is all theory and no practice. Compare Idea 343.
§6.1 p.132 You can be free even though force would have prevented you doing otherwise
     Full Idea: If a brain implant would compel you to perform an action which you in fact freely choose, then you are free, but couldn't have done otherwise.
     From: report of Daniel C. Dennett (Elbow Room: varieties of free will [1984], §6.1) by PG - Db (ideas)
§7.1 p.155 Rationality requires the assumption that things are either for better or worse
     Full Idea: We must assume that something matters - that some things are for better and some things are for worse, for without that our assumed rationality would have nothing on which to get a purchase.
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §7.1)
     A reaction: It does seem that rationality wouldn't exist as an activity without some value to motivate it.
§7.3 p.170 Why pronounce impossible what you cannot imagine?
     Full Idea: You say you cannot imagine that p, and therefore declare that p is impossible. Mightn't that be hubris?
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §7.3)
§7.3 p.172 Can we conceive of a being with a will freer than our own?
     Full Idea: Can I even conceive of beings whose wills are freer than our own?
     From: Daniel C. Dennett (Elbow Room: varieties of free will [1984], §7.3)
1985 works
p.79 If mind is just an explanation, the explainer must have beliefs
     Full Idea: If something has beliefs only if something else is disposed to "treat it" (i.e. think of it) as though it does, then we seem at least to have an infinite regress of appeals to believers.
     From: comment on Daniel C. Dennett (works [1985]) by Georges Rey - Contemporary Philosophy of Mind 3.2.1
     A reaction: This sounds like a serious difficulty for behaviourists, but is not insurmountable. We need a community of interlocking behaviours, with a particular pattern of behaviour being labelled (for instrumental convenience) as 'beliefs'.
p.133 You couldn't drive a car without folk psychology
     Full Idea: Folk psychology is indispensable for driving a car, which would be terrifying if we didn't assume there were psychologically normal people behind the wheels.
     From: Daniel C. Dennett (works [1985]), quoted by Georges Rey - Contemporary Philosophy of Mind p.133 n35
     A reaction: Nice example. If someone is approaching you from the front on your side of the road, should you assume that they are 'psychologically normal'? Does psychology imply behaviour, or vice versa?
1988 Quining Qualia
p.55 Dennett denies the existence of qualia
     Full Idea: Dennett goes to the extreme of denying the existence of qualia altogether.
     From: report of Daniel C. Dennett (Quining Qualia [1988]) by E.J. Lowe - Introduction to the Philosophy of Mind Ch.3
     A reaction: I sympathise with Dennett. Once you know how physically complex and rapid a quale is (about nine billion connections, all firing continuously), the notion that it seems to be some new 'thing', while just being a process, seems fine. Like a waterfall.
1991 Consciousness Explained
2.4 p.37 Dualism wallows in mystery, and to accept it is to give up
     Full Idea: Given the way dualism wallows in mystery, accepting dualism is giving up.
     From: Daniel C. Dennett (Consciousness Explained [1991], 2.4)
     A reaction: Some things, of course, might be inherently mysterious to us, and we might as well give up. The big dualist mystery is the explanation of how such different substances can interact. How do two physical substances manage to interact?
5.3 p.126 It is arbitrary to say which moment of brain processing is conscious
     Full Idea: If one wants to settle on some moment of processing in the brain as the moment of consciousness, this has to be arbitrary.
     From: Daniel C. Dennett (Consciousness Explained [1991], 5.3)
     A reaction: Seems eliminativist, as it implies that all that is really going on is 'processing'. But there are two senses of 'arbitrary' - that calling it consciousness is arbitrary (wrong), or thinking that mind doesn't move abruptly into consciousness (right).
5.4 p.127 Perhaps the brain doesn't 'fill in' gaps in consciousness if no one is looking
     Full Idea: Perhaps the brain doesn't actually have to go to the trouble of "filling in" anything with "construction" - for no one is looking.
     From: Daniel C. Dennett (Consciousness Explained [1991], 5.4)
     A reaction: This is a very nice point, because claims that the mind 'fills in' during various psychological visual tests always presuppose a person (or homunculus?) who is overseeing the visual experiences.
7.2 p.173 Originally there were no reasons, purposes or functions; since there were no interests, there were only causes
     Full Idea: In the beginning there were no reasons; there were only causes. Nothing had a purpose, nothing had so much as a function; there was no teleology in the world at all. The explanation is simple: there was nothing that had interests.
     From: Daniel C. Dennett (Consciousness Explained [1991], 7.2)
     A reaction: It seems reasonable to talk of functions even if the fledgling 'interests' are unconscious, as in a leaf. Is a process leading to an end an 'interest'? What are the 'interests' of a person who is about to commit suicide?
7.2 p.177 Brains are essentially anticipation machines
     Full Idea: All brains are, in essence, anticipation machines.
     From: Daniel C. Dennett (Consciousness Explained [1991], 7.2)
     A reaction: This would necessarily, I take it, make them induction machines. So brains will only evolve in a world where induction is possible, which is one where there are a lot of immediately apprehensible regularities.
8.1 p.228 The brain is controlled by shifting coalitions, guided by good purposeful habits
     Full Idea: Who's in charge of the brain? First one coalition and then another, shifting in ways that are not chaotic thanks to good meta-habits that tend to entrain coherent, purposeful sequences rather than an interminable helter-skelter power grab.
     From: Daniel C. Dennett (Consciousness Explained [1991], 8.1)
     A reaction: This is probably the best anti-ego account available. Dennett offers our sense of self as a fictional autobiography, but the sense of a single real controller is very powerful. If I jump at a noise, I feel that 'I' have lost control of myself.
9.2 p.262 All functionalism is 'homuncular', of one grain size or another
     Full Idea: All varieties of functionalism can be viewed as 'homuncular' functionalism of one grain size or another.
     From: Daniel C. Dennett (Consciousness Explained [1991], 9.2)
     A reaction: This seems right, as any huge and complex mechanism (like a moon rocket) will be made up of some main systems, then sub-systems, then sub-sub-sub.... This assumes that there are one or two overarching purposes, which there are in people.
11.4 p.338 In peripheral vision we see objects without their details, so blindsight is not that special
     Full Idea: If a playing card is held in peripheral vision, we can see the card without being able to identify its colours or its shapes. That's normal sight, not blindsight, so we should be reluctant on those grounds to deny visual experience to blindsight subjects.
     From: Daniel C. Dennett (Consciousness Explained [1991], 11.4)
     A reaction: This is an important point in Dennett's war against the traditional all-or-nothing view of mental events. Nevertheless, blindsight subjects deny all mental experience, while picking up information, and peripheral vision never seems like that.
11.4 p.342 Blindsight subjects glean very paltry information
     Full Idea: Discussions of blindsight have tended to ignore just how paltry the information is that blindsight subjects glean from their blind fields.
     From: Daniel C. Dennett (Consciousness Explained [1991], 11.4)
     A reaction: This is a bit unfair, because blindsight has mainly pointed to interesting speculations (e.g. Idea 2953). Nevertheless, if blindsight with very high information content is actually totally impossible, the speculations ought to be curtailed.
12.2 p.375 Light wavelengths entering the eye are only indirectly related to object colours
     Full Idea: The wavelengths of the light entering the eye are only indirectly related to the colours we see objects to be.
     From: Daniel C. Dennett (Consciousness Explained [1991], 12.2)
     A reaction: This is obviously bad news for naïve realism, but I also take it as good support for the primary/secondary distinction. I just can't make sense of anyone claiming that colour exists anywhere else except in the brain.
12.4 p.397 We can't assume that dispositions will remain normal when qualia have been inverted
     Full Idea: The goal of the experiment was to describe a case in which it was obvious that the qualia would be inverted while the reactive dispositions would be normalized. But the assumption that one could just tell is question-begging.
     From: Daniel C. Dennett (Consciousness Explained [1991], 12.4)
     A reaction: It certainly seems simple and plausible that if we inverted our experience of traffic light colours, no difference in driver behaviour would be seen. However, my example, of a conversation in a gallery of abstract art, seems more problematic.
12.5 p.402 If an epiphenomenon has no physical effects, it has to be undetectable
     Full Idea: Psychologists mean a by-product by an 'epiphenomenon', ...but the philosophical meaning is too strong: it yields a concept of no utility whatsoever. Since x has no physical effects (according to the definition), no instrument can detect it.
     From: Daniel C. Dennett (Consciousness Explained [1991], 12.5)
     A reaction: Well said! This has always been my half-formulated intuition about the claim that the mind (or anything) might be totally epiphenomenal. Something like the reflection on a lake can only ever be irrelevant to the functioning of that specified system.
12.6 p.409 Visual experience is composed of neural activity, which we find pleasing
     Full Idea: All visual experience is composed of activities of neural circuits whose very activity is innately pleasing to us.
     From: Daniel C. Dennett (Consciousness Explained [1991], 12.6)
     A reaction: This is the nearest I can find to Dennett saying something eliminativist. It seems to beg the question of who 'us' refers to, and what is being pleased, and how it is 'pleased' by these neural circuits. The Hard Question?
13.1 p.418 The psychological self is an abstraction, not a thing in the brain
     Full Idea: Like the biological self, the psychological or narrative self is an abstraction, not a thing in the brain.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.1)
     A reaction: Does Dennett have empirical evidence for this claim? It seems to me perfectly possible that there is a real thing called the 'self', and it is the central controller of the brain (involving propriotreptic awareness, understanding, and will).
13.1 p.418 We tell stories about ourselves, to protect, control and define who we are
     Full Idea: Our fundamental tactic of self-protection, self-control and self-definition is telling stories, and more particularly concocting and controlling the story we tell others - and ourselves - about who we are.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.1)
     A reaction: This seems to suggest that there is someone who wants to protect themselves, and who wants to tell the stories, and does tell the stories. No one can deny the existence of this autobiographical element in our own identity.
13.1 p.418 We spin narratives about ourselves, and the audience posits a centre of gravity for them
     Full Idea: The effect of our string of personal narratives is to encourage the audience to (try to) posit a unified agent whose words they are, about whom they are: in short, to posit a centre of narrative gravity.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.1)
     A reaction: What would be the evolutionary advantage of getting the audience to posit a non-existent self, instead of a complex brain? It might be simpler than that, since we say of a bird "it wants to do x". What is "it"? Some simple thing, like a will.
13.2 p.421 Words are fixed by being attached to similarity clusters, without mention of 'essences'
     Full Idea: We don't need 'essences' or 'criteria' to keep the meaning of our word from sliding all over the place; our words will stay put, quite firmly attached as if by gravity to the nearest similarity cluster.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.2)
     A reaction: Plausible, but essentialism (which may have been rejuvenated by a modern theory of reference in language) is not about language. It is offering an explanation of why there are 'similarity clusters'. Organisms are too complex to have pure essences.
13.2 p.422 People accept blurred boundaries in many things, but insist self is All or Nothing
     Full Idea: Many people are comfortable taking the pragmatic approach to night/day, living/nonliving and mammal/premammal, but get anxious about the same attitude to having a self and not having a self. It must be All or Nothing, and One to a Customer.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.2)
     A reaction: Personally I think I believe in the existence of the self, but I also agree with Dennett. I greatly admire his campaign against All or Nothing thinking, which is a relic from an earlier age. A partial self could result from infancy or brain damage.
13.2 p.423 Selves are not soul-pearls, but artefacts of social processes
     Full Idea: Selves are not independently existing soul-pearls, but artefacts of the social processes that create us, and, like other such artefacts, subject to sudden shifts in status.
     From: Daniel C. Dennett (Consciousness Explained [1991], 13.2)
     A reaction: "Soul-pearls" is a nice phrase for the Cartesian view, but there can be something between soul-pearls and social constructs. Personally I think the self is a development of the propriotreptic (body) awareness that even the smallest animals must possess.
14.1 p.431 "Qualia" can be replaced by complex dispositional brain states
     Full Idea: "Qualia" can be replaced by complex dispositional states of the brain.
     From: Daniel C. Dennett (Consciousness Explained [1991], 14.1)
     A reaction: 'Dispositional' reveals Dennett's behaviourist roots (he was a pupil of Ryle). Fodor is right that physicalism cannot just hide behind the word "complexity". That said, the combination of complexity and speed might add up to physical 'qualia'.
14.2 p.442 We can know a lot of what it is like to be a bat, and nothing important is unknown
     Full Idea: There is at least a lot that we can know about what it is like to be a bat, and Nagel has not given us a reason to believe there is anything interesting or theoretically important that is inaccessible to us.
     From: Daniel C. Dennett (Consciousness Explained [1991], 14.2)
     A reaction: I agree. If you really wanted to identify with the phenomenology of bathood, you could spend a lot of time in underground caves whistling with your torch turned off. I can't, of course, be a bat, but then I can't be my self of yesterday.
14.2 p.447 We can't draw a clear line between conscious and unconscious
     Full Idea: Even in our own case, we cannot draw the line separating our conscious mental states from our unconscious mental states.
     From: Daniel C. Dennett (Consciousness Explained [1991], 14.2)
     A reaction: This strikes me as being a simple and self-evident truth, which anyone working on the brain takes for granted, but an awful lot of philosophers (stuck somewhere in the seventeenth century) can't seem to grasp.
14.4 p.454 Conscious events can only be explained in terms of unconscious events
     Full Idea: Only a theory that explained conscious events in terms of unconscious events could explain consciousness at all.
     From: Daniel C. Dennett (Consciousness Explained [1991], 14.4)
     A reaction: This sounds undeniable, so it seems to force a choice between reductive physicalism and mysterianism. Personally I think there must be an explanation in terms of non-conscious events, even if humans are too thick to understand it.
p.376 p. We can bring dispositions into existence, as in creating an identifier
     Full Idea: We can bring a real disposition into existence, as in Dennett's case of a piece of cardboard torn in half, so that two strangers can infallibly identify one another.
     From: report of Daniel C. Dennett (Consciousness Explained [1991], p.376) by Stephen Mumford - Dispositions 03.7 n37
     A reaction: Presumably human artefacts in general qualify as sets of dispositions which we have created.
1994 Daniel Dennett on himself
p.238 p.238 Learning is evolution in the brain
     Full Idea: Learning is evolution in the brain.
     From: Daniel C. Dennett (Daniel Dennett on himself [1994], p.238)
     A reaction: This is a rather non-conscious, associationist view, connected to Dawkins' idea of 'memes'. It seems at least partially correct.
p.239 p.239 The nature of content is entirely based on its functional role
     Full Idea: All attributions of content are founded on an appreciation of the functional roles of the items in question.
     From: Daniel C. Dennett (Daniel Dennett on himself [1994], p.239)
     A reaction: This seems wrong to me. How can anything's nature be its function? It must have intrinsic characteristics in order to have the function. This is an evasion.
p.239 p.239 Like the 'centre of gravity', desires and beliefs are abstract concepts with no actual existence
     Full Idea: Like such abstracta as centres of gravity and parallelograms of force, the beliefs and desires posited by the highest intentional stance have no independent and concrete existence.
     From: Daniel C. Dennett (Daniel Dennett on himself [1994], p.239)
     A reaction: I don't see why we shouldn't one day have a physical account of the distinctive brain events involved in a belief or a desire.
p.239 p.239 The 'intentional stance' is a way of interpreting an entity by assuming it is rational and self-aware
     Full Idea: The 'intentional stance' is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e. rational) self-regarding agent.
     From: Daniel C. Dennett (Daniel Dennett on himself [1994], p.239)
     A reaction: This is Dennett's 'instrumentalism', a descendant of behaviourism, which strikes me as a pragmatist's evasion of the ontological problems of mind which should interest philosophers.
p.239 p.239 Biology is a type of engineering, not a search for laws of nature
     Full Idea: Biology is not a science like physics, in which one should strive to find 'laws of nature', but a species of engineering.
     From: Daniel C. Dennett (Daniel Dennett on himself [1994], p.239)
     A reaction: Yes. This is also true of chemistry, which has always struck me as miniaturised car mechanics.
1995 Darwin's Dangerous Idea
§1.1 p.21 Darwin's idea was the best idea ever
     Full Idea: If I were to give an award for the single best idea anyone has ever had, I'd give it to Darwin.
     From: Daniel C. Dennett (Darwin's Dangerous Idea [1995], §1.1)
1996 Kinds of Minds
p.162 Minds are hard-wired, or trial-and-error, or experimental, or fully self-aware
     Full Idea: Dennett identifies a hierarchy of minds running from 'Darwinian' (hard-wired solutions to problems), to 'Skinnerian' (trial-and-error), to 'Popperian' (anticipating possible experience), to 'Gregorian' (self-conscious representation, probably linguistic).
     From: report of Daniel C. Dennett (Kinds of Minds [1996]) by John Heil - Philosophy of Mind Ch.5
     A reaction: Interesting. The concept of an experiment seems a major step (assessing reality against an internal map), and the ability to think about one's own thoughts certainly strikes me as the mark of a top level mind. Maybe that is the importance of language.
Ch.1 p.8 Most people see an abortion differently if the foetus lacks a brain
     Full Idea: If a fetus that is being considered for abortion is known to be anencephalic (lacking a brain), this dramatically changes the issue for most people, though not for all.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.1)
     A reaction: A very effective point, as it is hard to see what grounds could be given for not aborting in this case. But the brain then clearly becomes the focus of why abortion is often rejected by many people.
Ch.1 p.17 What is it like to notice an uncomfortable position when you are asleep?
     Full Idea: What is it like to notice, while sound asleep, that your left arm has become twisted into a position in which it is putting undue strain on your left shoulder? Like nothing.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.1)
     A reaction: A nice question, and all part of Dennett's accurate campaign to show that consciousness is not an all-or-nothing thing. As when we are barely aware of driving, innumerable things happen in the shadowy corners of thought.
Ch.2 p.68 The predecessor and rival of the language of thought hypothesis is the picture theory of ideas
     Full Idea: The ancestor and chief rival of the language-of-thought hypothesis is the picture theory of ideas - that thoughts are about what they are about because they resemble their objects.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.2)
     A reaction: When you place them side by side, neither seems quite right. How can a mental state resemble an object, and how can an inner language inherently capture the features of an object? Maybe we lack the words for the correct theory.
Ch.2 p.73 We descend from robots, and our intentionality is composed of billions of crude intentional systems
     Full Idea: We are descended from robots, and composed of robots, and all the intentionality we enjoy is derived from the more fundamental intentionality of billions of crude intentional systems.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.2)
     A reaction: A more grand view of intentionality (such as Searle's) seems more attractive than this, but the crucial fact about Dennett is that he takes the implications of evolution much more seriously than other philosophers. He's probably right.
Ch.3 p.81 Maybe there is a minimum brain speed for supporting a mind
     Full Idea: Perhaps there is a minimum speed for a mind, rather like the minimum escape velocity required to overcome gravity and leave the planet.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.3)
     A reaction: Dennett rejects this speculation, but he didn't stop to imagine what it would be LIKE if your brain slowed down, and he never considers Edelman's view that mind is a process. Put the two together…
Ch.3 p.87 Maybe plants are very slow (and sentient) animals, overlooked because we are faster?
     Full Idea: Might plants just be 'very slow animals', enjoying sentience that has been overlooked by us because of our human timescale chauvinism?
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.3)
     A reaction: Delightful thought, arising from pondering the significance of the speed of operation of the brain. I think it is false, because I think high speed is essential to mind, and Dennett seems not to.
Ch.3 p.100 The materials for a mind only matter because of speed, and a need for transducers and effectors
     Full Idea: I think there are only two good reasons why, when you make a mind, the materials matter: speed, and the ubiquity of transducers and effectors throughout the nervous system.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.3)
     A reaction: This sounds roughly right, because it gives you something between multiple realisability (minds made of cans and string), and type-type identity (minds ARE a particular material). Call it 'biological functionalism'?
Ch.3 p.101 There is no more anger in adrenaline than silliness in a bottle of whiskey
     Full Idea: There is no more fear or anger in adrenaline than there is silliness in a bottle of whiskey.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.3)
     A reaction: Not exactly an argument, but a nice rhetorical point against absurd claims about identity and reduction and elimination. We may say that there is no fear without adrenaline, and no adrenaline in a live brain without fear.
Ch.4 p.128 Sentience comes in grades from robotic to super-human; we only draw a line for moral reasons
     Full Idea: 'Sentience' comes in every imaginable grade or intensity, from the simplest and most 'robotic', to the most exquisitely sensitive, hyper-reactive 'human'. We have to draw a line for moral policy, but it is unlikely we will ever discover a threshold.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.4)
     A reaction: This is the only plausible view, if you take the theory of evolution seriously. We can even observe low-grade marginal sentience in our own minds, and then shoot up the scale when we focus our minds properly on an object.
Ch.5 p.159 Being a person must involve having second-order beliefs and desires (about beliefs and desires)
     Full Idea: An important step towards becoming a person is the step up from a first-order intentional system to a second-order system (which has beliefs and desires about beliefs and desires).
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.5)
     A reaction: Call it 'meta-thought'. I agree. Dennett thinks language is crucial to this, but the hallmark of intelligence and full-blown personhood is meta- and meta-meta-thought. Maybe the development of irony is a step up the evolutionary scale. Sarcasm is GOOD.
Ch.6 p.211 Concepts are things we (unlike dogs) can think about, because we have language
     Full Idea: A dog cannot consider its concepts. Concepts are not things in a dog's world in the way that cats are. Concepts are things in our world, because we have language.
     From: Daniel C. Dennett (Kinds of Minds [1996], Ch.6)
     A reaction: Dogs must have concepts, though, or much of their behaviour (like desperation to go for a walk, or to eat) is baffling. This is as good a proposal as I have ever encountered for the value of language. Meta-thought is a huge evolutionary advantage.
1998 Brainchildren
Ch.25 p.358 Unconscious intentionality is the foundation of the mind
     Full Idea: It is on the foundation of unconscious intentionality that the higher-order complexities developed that have culminated in what we call consciousness.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
     A reaction: Sounds right to me. Pace Searle, I have no problem with unconscious intentionality, and the general homuncular picture of low levels building up to complex high levels, which suddenly burst into the song and dance of consciousness.
Ch.25 p.362 That every mammal has a mother is a secure reality, but without foundations
     Full Idea: Naturalistic philosophers should look with favour on the finite regress that peters out without foundations or thresholds or essences. That every mammal has a mother does not imply an infinite regress. Mammals have secure reality without foundations.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
     A reaction: I love this thought, which has permeated my thinking quite extensively. Logicians are terrified of regresses, but this may be because they haven't understood the vagueness of language.
Ch.25 p.363 A language of thought doesn't explain content
     Full Idea: Postulating a language of thought is a postponement of the central problems of content ascription, not a necessary first step.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
     A reaction: If the idea of content is built on the idea of representation, then you need some account of what the brain does with its representations.
Ch.25 p.364 Maybe language is crucial to consciousness
     Full Idea: I continue to argue for a crucial role of natural language in generating the central features of consciousness.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
     A reaction: 'Central features' might beg the question. Dennett does doubt the consciousness of animals (1996). As I stare out of my window, his proposal seems deeply counterintuitive. How could language 'generate' consciousness? Would loss of language create zombies?
Ch.25 p.366 Philosophers regularly confuse failures of imagination with insights into necessity
     Full Idea: The besetting foible of philosophers is mistaking failures of imagination for insights into necessity.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
Ch.6 p.128 Maybe there can be non-conscious concepts (e.g. in bees)
     Full Idea: Concepts do not require consciousness. As Jaynes says, the bee has a concept of a flower, but not a conscious concept.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
     A reaction: Does the flower have a concept of rain? Rain plays a big functional role in its existence. It depends, alas, on what we mean by a 'concept'.
Ch.6 p.128 Does consciousness need the concept of consciousness?
     Full Idea: You can't have consciousness until you have the concept of consciousness.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
     A reaction: If you read enough Dennett this begins to sound vaguely plausible, but next day it sounds like an absurd claim. 'You can't see a tree until you have the concept of a tree?' When do children acquire the concept of consciousness? Are apes non-conscious?
Ch.6 p.129 Could a robot be made conscious just by software?
     Full Idea: How could you make a robot conscious? The answer, I think, is to be found in software.
     From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
     A reaction: This seems to be a commitment to strong AI, though Dennett is keen to point out that brains are the only plausible implementation of such software. Most find his claim baffling.
2005 Sweet Dreams
Ch.1 p.19 What matters about neuro-science is the discovery of the functional role of the chemistry
     Full Idea: Neuro-science matters because - and only because - we have discovered that the many different neuromodulators and other chemical messengers that diffuse throughout the brain have functional roles that make important differences.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.1)
     A reaction: I agree with Dennett that this is the true ground for pessimism about spectacular breakthroughs in artificial intelligence, rather than abstract concerns about irreducible features of the mind like 'qualia' and 'rationality'.
Ch.3 p.69 The work done by the 'homunculus in the theatre' must be spread amongst non-conscious agencies
     Full Idea: All the work done by the imagined homunculus in the Cartesian Theater must be distributed among various lesser agencies in the brain, none of which is conscious.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.3)
     A reaction: Dennett's account crucially depends on consciousness being much more fragmentary than most philosophers claim it to be. It is actually full of joints, which can come apart. He may be right.
Ch.3 p.71 I don't deny consciousness; it just isn't what people think it is
     Full Idea: I don't maintain, of course, that human consciousness does not exist; I maintain that it is not what people often think it is.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.3)
     A reaction: I consider Dennett to be as near as you can get to an eliminativist, but he is not stupid. As far as I can see, the modern philosopher's bogey-man, the true total eliminativist, simply doesn't exist. Eliminativists usually deny propositional attitudes.
Ch.6 p.137 Intelligent agents are composed of nested homunculi, of decreasing intelligence, ending in machines
     Full Idea: As long as your homunculi are more stupid and ignorant than the intelligent agent they compose, the nesting of homunculi within homunculi can be finite, bottoming out, eventually, with agents so unimpressive they can be replaced by machines.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.6)
     A reaction: [Dennett first proposed this in 'Brainstorms' 1978]. This view was developed well by Lycan. I rate it as one of the most illuminating ideas in the modern philosophy of mind. All complex systems (like aeroplanes) have this structure.
Ch.8 p.177 Obviously there can't be a functional analysis of qualia if they are defined by intrinsic properties
     Full Idea: If you define qualia as intrinsic properties of experiences considered in isolation from all their causes and effects, logically independent of all dispositional properties, then they are logically guaranteed to elude all broad functional analysis.
     From: Daniel C. Dennett (Sweet Dreams [2005], Ch.8)
     A reaction: This is a good point - it seems daft to reify qualia and imagine them dangling in mid-air with all their vibrant qualities - but that is a long way from saying there is nothing more to qualia than functional roles. Functions must be explained too.