Combining Texts

Ideas for 'The Elm and the Expert', 'Introduction to the Philosophy of Mind' and 'The Scope and Language of Science'

16 ideas

18. Thought / A. Modes of Thought / 1. Thought
Some behaviourists believe thought is just suppressed speech [Lowe]
     Full Idea: Some behaviourists have held the view that thinking just is, in effect, suppressed speech.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 7)
     A reaction: He cites J.B. Watson. This would imply that infants and animals can't think. Introspecting my own case, I don't believe it. When I am navigating through a town, for example, I directly relate to my mental map; I see little sign of anything verbal.
18. Thought / A. Modes of Thought / 2. Propositional Attitudes
Propositional attitudes are propositions presented in a certain way [Fodor]
     Full Idea: Propositional attitudes are really three-place relations, between a creature, a proposition, and a mode of presentation (which are sentences of Mentalese).
     From: Jerry A. Fodor (The Elm and the Expert [1993], §2.II)
     A reaction: I'm not sure about 'really'! Why do we need a creature? Isn't 'hoping it will rain' a propositional attitude which some creature may or may not have? Fodor wants it to be physical, but it's abstract?
18. Thought / A. Modes of Thought / 5. Rationality / a. Rationality
Rationality has mental properties - autonomy, productivity, experimentation [Fodor]
     Full Idea: Mentalism isn't gratuitous; you need it to explain rationality. Mental causation buys you behaviours that are unlike reflexes in at least three ways: they're autonomous, they're productive, and they're experimental.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §4)
     A reaction: He makes his three ways sound all-or-nothing, which is (I believe) the single biggest danger when thinking about the mind. "Either you are conscious, or you are not..."
18. Thought / A. Modes of Thought / 5. Rationality / b. Human rationality
People are wildly inaccurate in estimating probabilities about an observed event [Lowe]
     Full Idea: In the 'cab problem' (what colour was the cab in the accident?) most people estimate an 80% probability of it being a blue cab, but Bayes' Theorem calculates the probability at 41%, suggesting people put too much faith in eyewitness testimony.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 8)
     A reaction: For details of the 'cab problem', see Lowe p.200. My suspicion is that people get into a tangle when confronted with numbers in a theoretical situation, but are much better at it when faced with a real life problem, like 'who ate my chocolate?'
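A note on the arithmetic: the 41% can be reconstructed with Bayes' Theorem, assuming the standard Tversky/Kahneman version of the problem (85% of the city's cabs are green, 15% are blue, and the witness identifies colours correctly 80% of the time); Lowe's own figures on p.200 may differ slightly.

\[ P(\text{blue} \mid \text{witness says blue}) = \frac{0.80 \times 0.15}{0.80 \times 0.15 + 0.20 \times 0.85} = \frac{0.12}{0.29} \approx 0.41 \]

The witness's 80% reliability is swamped by the low base rate of blue cabs, which is the point of the example.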
'Base rate neglect' makes people favour the evidence over its background [Lowe]
     Full Idea: 'Base rate neglect' (attending to the witness or evidence, and ignoring background information) is responsible for doctors exaggerating the significance of positive results in diagnosis of relatively rare medical conditions.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 8)
     A reaction: This seems to be one of the clearest cases where people's behaviour is irrational, though I suspect that people are much more rational about things if the case is simple and non-numerical. However, people are very credulous about wonderful events.
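A hedged illustration with purely hypothetical numbers (not taken from Lowe): suppose a condition affects 1% of patients, and a test detects it 90% of the time while giving false positives 9% of the time. Bayes' Theorem then gives:

\[ P(\text{disease} \mid \text{positive test}) = \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.09 \times 0.99} = \frac{0.009}{0.0981} \approx 0.09 \]

Neglecting the 1% base rate makes the positive result look far more significant than it really is.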
18. Thought / B. Mechanics of Thought / 6. Artificial Thought / a. Artificial Intelligence
The 'Frame Problem' is how to program the appropriate application of general knowledge [Lowe]
     Full Idea: The 'Frame Problem' in artificial intelligence is how to write a program which not only embodies people's general knowledge, but specifies how that knowledge is to be applied appropriately, when circumstances can't be specified in advance.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 8)
     A reaction: As Lowe observes, this is a problem, but not necessarily an impossibility. There should be a way to map the concepts of knowledge symbolically onto the concepts of perception, just as human minds must somehow do.
Computers can't be rational, because they lack motivation and curiosity [Lowe]
     Full Idea: Lack of motivation and curiosity is perhaps the most fundamental reason for denying that computers could be, in any literal sense, rational beings.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 9)
     A reaction: I don't see why programmers couldn't move those two priorities to the top of the list in the program. When you switch on a robot, its first words could be 'Teach me something!', or 'Let's do something interesting!' Every piece of software has priorities.
18. Thought / B. Mechanics of Thought / 6. Artificial Thought / c. Turing Test
The Turing test is too behaviourist, and too verbal in its methods [Lowe]
     Full Idea: The Turing test is open to the objection that it is inspired by behaviourist assumptions and focuses too narrowly on verbal evidence of intelligence.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 8)
     A reaction: This is part of the objection that the test exhibits human chauvinism, and robots and aliens are wasting their time trying to pass it. You need human behaviour, especially speech, to do well. Inarticulate people can exhibit high practical intelligence.
18. Thought / C. Content / 1. Content
The naturalistic views of how content is created are the causal theory and the teleological theory [Lowe]
     Full Idea: The leading naturalistic theories of what it is that confers a specific content upon a given attitudinal state are the causal theory, and the teleological theory, both of which contain serious difficulties.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 4)
     A reaction: 'Causal' theories (Fodor) say the world directly causes content; 'teleological' theories (Millikan, Papineau) are based on the evolutionary purpose of content for the subject. I agree that neither seems adequate…
18. Thought / C. Content / 5. Twin Earth
XYZ (Twin Earth 'water') is an impossibility [Fodor]
     Full Idea: There isn't any XYZ, and there couldn't be any, and so we don't have to worry about it.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §2.I)
     A reaction: Jadeite and nephrite, which are virtually indistinguishable variants of jade, are real enough. You just need Twin Jewellers instead of Twin Earths. We could build them, and employ twins to work there.
Twin Earth cases imply that even beliefs about kinds of stuff are indexical [Lowe]
     Full Idea: The implication of considerations of Twin Earth cases is that even beliefs about the properties of kinds of stuff are implicitly indexical, or context-dependent, in character.
     From: E.J. Lowe (Introduction to the Philosophy of Mind [2000], Ch. 4)
     A reaction: This is a significant connection, between debates about the nature of indexicals (such as 'I' and 'this') and externalism about content generally. Is there no distinction between objective reference and contextual reference?
18. Thought / C. Content / 6. Broad Content
Truth conditions require a broad concept of content [Fodor]
     Full Idea: We need the idea of broad content to make sense of the fact that thoughts have the truth-conditions that they do.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §2.II)
     A reaction: There seems to be (as Dummett points out) a potential circularity here, as you can hardly know the truth-conditions of something if you don't already know its content.
18. Thought / C. Content / 7. Narrow Content
Concepts aren't linked to stuff; they are what is caused by stuff [Fodor]
     Full Idea: If the words of 'Swamp Man' (spontaneously created, with concepts) are about XYZ on Twin Earth, it is not because he's causally connected to the stuff, but because XYZ would cause his 'water' tokens (in the absence of H2O).
     From: Jerry A. Fodor (The Elm and the Expert [1993], App B)
     A reaction: The sight of the Eiffel Tower causes my 'France' tokens, so is my word 'France' about the Eiffel Tower? What would cause my 'nothing' tokens?
18. Thought / C. Content / 10. Causal Semantics
Knowing the content of a thought tells you a lot about what would cause it [Fodor]
     Full Idea: If you know the content of a thought, you know quite a lot about what would cause you to have it.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §4)
     A reaction: I'm not sure where this fits into the great jigsaw of the mind, but it strikes me as an acute and important observation. The truth of a thought is not essential to make you have it. Ask Othello.
18. Thought / C. Content / 12. Informational Semantics
Is content basically information, fixed externally? [Fodor]
     Full Idea: I assume intentional content reduces (in some way) to information. …The content of a thought depends on its external relations; on the way that the thought is related to the world, not the way that it is related to other thoughts.
     From: Jerry A. Fodor (The Elm and the Expert [1993], §1.2)
     A reaction: Does this make Fodor a 'weak' functionalist? The 'strong' version would say a thought is merely a location in a flow diagram, but Fodor's 'mentalism' includes a further 'content' in each diagram box.
18. Thought / D. Concepts / 3. Ontology of Concepts / b. Concepts as abilities
In the information view, concepts are potentials for making distinctions [Fodor]
     Full Idea: Semantics, according to the informational view, is mostly about counterfactuals; what counts for the identity of my concepts is not what I do distinguish but what I could distinguish if I cared to (even using instruments and experts).
     From: Jerry A. Fodor (The Elm and the Expert [1993], §2.I)
     A reaction: We all differ in our discriminations (and awareness of expertise), so our concepts would differ, which is bad news for communication (see Idea 223). The view has some plausibility, though.