
17. Mind and Body / E. Mind as Physical / 4. Connectionism

[mind is the sum of many associations/connections]

14 ideas
Could a cloud have a headache if its particles formed into the right pattern? [Harman]
     Full Idea: If the right pattern of electrical discharges occurred in a cloud instead of in a brain, would that also be a headache?
     From: Gilbert Harman (Thought [1973], 3.2)
     A reaction: The standard objection to functionalism is to propose absurd implementations of a mind, but probably only a brain could produce the right electro-chemical combination.
Modern connectionism is just Hume's theory of the 'association' of 'ideas' [Fodor]
     Full Idea: Churchland is pushing a version of connectionism …in which if you think of the elements as "ideas" and call the connections between them "associations", you've got a psychology that is no great advance on David Hume.
     From: Jerry A. Fodor (In a Critical Condition [2000], Ch. 8)
     A reaction: See Fodor's book 'Humean Variations' on how Hume should be improved. This idea strikes me as important for understanding Hume, who is very reticent about what his real views are on the mind.
Hume has no theory of the co-ordination of the mind [Fodor]
     Full Idea: What Hume didn't see was that the causal and representational properties of mental symbols have somehow to be coordinated if the coherence of mental life is to be accounted for.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §4)
     A reaction: Certainly the idea that it all somehow becomes magic at the point where the brain represents the world is incoherent - but it is a bit magical. How can the whole of my garden be in my brain? Weird.
Only the labels of nodes have semantic content in connectionism, and they play no role [Fodor]
     Full Idea: Connectionism has no truck with mental representations; on the one hand, only the node labels in 'neural networks' have semantic content, and, on the other, the node labels play no role in mental processes, in standard formulations.
     From: Jerry A. Fodor (LOT 2 [2008], Ch.1)
     A reaction: Connectionism must have some truth in it, yet mere connections can't do the full job. The difficulty is that nothing else seems to do the 'full job' either. Fodor cites productivity, systematicity, compositionality, logical form as the problems.
Hume's associationism offers no explanation at all of rational thought [Fodor]
     Full Idea: With Associationism there proved to be no way to get a rational mental life to emerge from the sorts of causal relations among thoughts that the 'laws of association' recognised.
     From: Jerry A. Fodor (Psychosemantics [1987], p. 18)
     A reaction: This might not be true if you add the concept of evolution, which has refined the associations to generate truth (which is vital for survival).
Instead of representation by sentences, it can be by a distribution of connectionist strengths [Kirk,R]
     Full Idea: In a connectionist system, information is represented not by sentences but by the total distribution of connection strengths.
     From: Robert Kirk (Mind and Body [2003], §7.6)
     A reaction: Neither sentences (of a language of thought) NOR connection strengths strike me as very plausible ways for a brain to represent things. It must be something to do with connections, but it must also be to do with neurons, or we get bizarre counterexamples.
Connectionism assigns numbers to nodes and branches, and plots the outcomes [Rey]
     Full Idea: In connectionism, each node is given an activation level, and each branch a weight, according to possible degree of effect. This results in 'excitatory' and 'inhibitory' connections.
     From: Georges Rey (Contemporary Philosophy of Mind [1997], 8.8)
     A reaction: Whether such a system could ever be 'conscious' is not the only interesting question. What could such a system do? Could it ever be good at philosophy?
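Rey's description — activation levels on nodes, weights on branches, with positive weights 'excitatory' and negative ones 'inhibitory' — can be sketched as a single feed-forward unit. All names and numbers below are invented for illustration; this is a minimal sketch, not any particular connectionist model.

```python
import math

# One connectionist node, per Rey's description: each input node has an
# activation level, each branch a weight; positive weights are
# 'excitatory', negative weights 'inhibitory'. Values are illustrative.

def unit_activation(inputs, weights):
    """Weighted sum of input activations, squashed to the 0-1 range."""
    total = sum(a * w for a, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))   # logistic squashing function

inputs = [0.9, 0.2, 0.7]     # activation levels of three input nodes
weights = [0.8, -1.5, 0.4]   # the -1.5 branch is an inhibitory connection

print(round(unit_activation(inputs, weights), 3))  # -> 0.668
```

The 'plotting of outcomes' Rey mentions is just repeated application of this computation across layers of such nodes.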
Connectionism explains well speed of perception and 'graceful degradation' [Rey]
     Full Idea: Connectionism is better than other AI strategies at capturing the extraordinary swiftness of perception, and at degrading in a 'graceful' way.
     From: Georges Rey (Contemporary Philosophy of Mind [1997], 8.8)
     A reaction: A good theory had better capture the extraordinary swiftness of perception. Also the swiftness of recognition. Compare seeing a surprising old friend in a crowd, and recognising the person you are looking for.
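The 'graceful degradation' Rey refers to falls out of distributed storage: because information is spread across many small connection strengths, damaging one connection only nudges the output rather than crashing the system. The numbers below are purely illustrative.

```python
# Sketch of 'graceful degradation': information spread across ten equal
# connections survives the loss of any one of them almost intact.
# All values are invented for illustration.

inputs = [1.0] * 10
weights = [0.1] * 10            # ten small, equal connection strengths

def output(ws):
    """Total activation reaching the output node."""
    return sum(a * w for a, w in zip(inputs, ws))

intact = output(weights)               # approx. 1.0
damaged = output([0.0] + weights[1:])  # approx. 0.9 -- one connection severed

print(round(intact, 2), round(damaged, 2))
```

Contrast a classical symbol system, where deleting one stored sentence removes that piece of information outright.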
Connectionism explains irrationality (such as the Gamblers' Fallacy) quite well [Rey]
     Full Idea: Connectionism offers promising accounts of irrational behaviour, such as people's bias towards positive instances, and their tendency to fall for the gamblers' fallacy.
     From: Georges Rey (Contemporary Philosophy of Mind [1997], 8.8)
     A reaction: That is strong support, because the chances of a computational robot having such tendencies are virtually nil, but all humans have the biases referred to (even philosophers).
Pattern recognition is puzzling for computation, but makes sense for connectionism [Rey]
     Full Idea: Connectionism is a way of capturing the holism of pattern recognition, as stressed by many critics of computational theories of mind.
     From: Georges Rey (Contemporary Philosophy of Mind [1997], 8.8)
     A reaction: I am drawn to the idea that arithmetic derives from pattern recognition, and the latter is basic to all minds (a kind of instant unthinking induction), so this seems to me a win for connectionism.
Perceptions could give us information without symbolic representation [Lyons]
     Full Idea: It is possible to give an account of concept-formation without a language of thought or representation, based on perception, which in the brain seems to involve information without representation.
     From: William Lyons (Approaches to Intentionality [1995], p.66)
     A reaction: This claim strikes me as being a little too confident. One might say that a concept IS a representation. However, the perception of several horses might 'blur' together to form a generalised horse.
Neural networks can generalise their training, e.g. truths about tigers apply mostly to lions [Pinker]
     Full Idea: The appeal of neural networks is that they automatically generalize their training to similar new items. If one has been trained to think tigers eat frosted flakes, it will generalise that lions do too, because it knows tigers as sets of features.
     From: Steven Pinker (The Blank Slate [2002], Ch.5)
     A reaction: This certainly is appealing, because it offers a mechanistic account of abstraction and universals, which everyone agrees are central to proper thinking.
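Pinker's point — that a network trained on tigers automatically generalises to lions because both are encoded as overlapping sets of features — can be illustrated with a single linear unit. The feature list and connection strengths below are invented for illustration; no training procedure is shown, only its supposed result.

```python
# Illustration of Pinker's generalisation point: a unit whose weights
# were (we suppose) trained on 'tiger' responds almost as strongly to
# 'lion', because the two share most of their feature encoding.
# Features and weights are invented.

FEATURES = ['feline', 'striped', 'carnivore', 'large', 'maned']

tiger = [1, 1, 1, 1, 0]
lion  = [1, 0, 1, 1, 1]

# Hypothetical connection strengths for the response
# 'eats frosted flakes' (Pinker's example belief):
weights = [0.6, 0.1, 0.5, 0.4, 0.0]

def response(animal):
    """Strength of the unit's response to an animal's feature vector."""
    return sum(f * w for f, w in zip(animal, weights))

print(round(response(tiger), 2))  # approx. 1.6
print(round(response(lion), 2))   # approx. 1.5 -- the belief generalises
```

Because 'lion' differs from 'tiger' only on low-weight features ('striped', 'maned'), the trained response carries over almost unchanged — which is the mechanistic account of abstraction the reaction finds appealing.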
There are five types of reasoning that seem beyond connectionist systems [Pinker, by PG]
     Full Idea: Connectionist networks have difficulty with the kind/individual distinction (ducks/this duck), with compositionality (relations), with quantification (reference of 'all'), with recursion (embedded thoughts), and with categorical reasoning (exceptions).
     From: report of Steven Pinker (The Blank Slate [2002], Ch.5) by PG - Db (ideas)
     A reaction: [Read Pinker p.80!] These are essentially all the more sophisticated aspects of logical reasoning that Pinker can think of. Personally I would be reluctant to say a priori that connectionism couldn't cope with these things, just because they seem tough.
Connectionists cannot distinguish concept-memories from their background, or the processes [Machery]
     Full Idea: Connectionists typically do not distinguish between processes and memory stores, and, more importantly, it is unclear whether connectionists can draw a distinction between the knowledge stored in a concept and the background.
     From: Edouard Machery (Doing Without Concepts [2009], 1.1)
     A reaction: In other words connectionism fails to capture the structured nature of our thinking. There is an innate structure (which, say I, should mainly be seen as 'mental files').