Combining Texts

All the ideas for 'fragments/reports', 'Troubles with Functionalism' and 'Mental Content'



27 ideas

14. Science / D. Explanation / 2. Types of Explanation / a. Types of explanation
Some explanations offer to explain a mystery by a greater mystery [Schulte]
     Full Idea: An 'obscurum per obscurius' explanation explains something mysterious by something even more mysterious.
     From: Peter Schulte (Mental Content [2023], 6)
     A reaction: Schulte's example is trying to explain mental content in terms of phenomenal experience. That is, roughly, explaining content by qualia, when the latter is the 'hard problem'.
15. Nature of Minds / B. Features of Minds / 5. Qualia / a. Nature of qualia
Lobotomised patients can cease to care about a pain [Block]
     Full Idea: After frontal lobotomies, patients typically report that they still have pains, though the pains no longer bother them.
     From: Ned Block (Troubles with Functionalism [1978], p. 83)
     A reaction: I take this to be an endorsement of reductive physicalism, because what matters about pains is that they bother us, not how they feel. So frog pain could do the job if it felt different from ours but was disliked by the frog.
15. Nature of Minds / B. Features of Minds / 5. Qualia / c. Explaining qualia
A brain looks no more likely than anything else to cause qualia [Block]
     Full Idea: NO physical mechanism seems very intuitively plausible as a seat of qualia, least of all a brain.
     From: Ned Block (Troubles with Functionalism [1978], p. 78)
     A reaction: I'm not sure about "least of all", given the mind-boggling complexity of the brain's connections. Certainly, though, nothing in either folk physics or academic physics suggests that any physical object is likely to be aware of anything.
17. Mind and Body / B. Behaviourism / 2. Potential Behaviour
Behaviour requires knowledge as well as dispositions [Block]
     Full Idea: A desire cannot be identified with a disposition to act, since the agent might not know that a particular act leads to the thing desired, and thus might not be disposed to do it.
     From: Ned Block (Troubles with Functionalism [1978], p. 69)
     A reaction: One might have a disposition to act, but not in a particular way. "Something must be done". To get to the particular act, it seems that indeed a belief must be added to the desire.
17. Mind and Body / C. Functionalism / 1. Functionalism
In functionalism, desires are internal states with causal relations [Block]
     Full Idea: According to functionalism, a system might have the behaviouristic input-output relations, yet not desire something, as this requires internal states with certain causal relations.
     From: Ned Block (Troubles with Functionalism [1978], p. 69)
     A reaction: Such a system might be Putnam's 'superactor', who only behaves as if he desires something. Of course, the internal states might need more than just 'causal relations'.
Functionalism is behaviourism, but with mental states as intermediaries [Block]
     Full Idea: Functionalism is a new incarnation of behaviourism, replacing sensory inputs with sensory inputs plus mental states, and replacing dispositions to act with dispositions plus certain mental states.
     From: Ned Block (Troubles with Functionalism [1978], p. 69)
     A reaction: I think of functionalism as behaviourism which extends inside the 'black box' between stimulus and response. It proposes internal stimuli and responses. Consequently functionalism inherits some behaviourist problems.
You might invert colours, but you can't invert beliefs [Block]
     Full Idea: It is hard to see how to make sense of the analog of color spectrum inversion with respect to non-qualitative states such as beliefs (where two subjects are functionally equivalent but have different beliefs).
     From: Ned Block (Troubles with Functionalism [1978], p. 81)
     A reaction: I would suggest that beliefs can be 'inverted', because there are all sorts of ways to implement a belief, but colour can't be inverted, because that depends on a particular brain state. It makes good sense to me...
17. Mind and Body / C. Functionalism / 8. Functionalism critique
Could a creature without a brain be in the right functional state for pain? [Block]
     Full Idea: If pain is a functional state, it cannot be a brain state, because creatures without brains could realise the same Turing machine as creatures with brains.
     From: Ned Block (Troubles with Functionalism [1978], p. 70)
     A reaction: This strikes me as being a poorly grounded claim. There may be some hypothetical world where brainless creatures implement all our functions, but from here brains look the only plausible option.
Not just any old functional network will have mental states [Block]
     Full Idea: If there are any fixed points in the mind-body problem, one of them is that the economy of Bolivia could not have mental states, no matter how it is distorted.
     From: Ned Block (Troubles with Functionalism [1978], p. 86)
     A reaction: It is hard to disagree with this, but then it can hardly be a serious suggestion that anyone could see how to reconfigure an economy so that it mapped the functional state of the human brain. This is not a crucial problem.
In functionalism, what are the special inputs and outputs of conscious creatures? [Block]
     Full Idea: In functionalism, it is very hard to see how there could be a single physical characterization of the inputs and outputs of all and only creatures with mentality.
     From: Ned Block (Troubles with Functionalism [1978], p. 87)
     A reaction: It would be theoretically possible if the only way to achieve mentality was to have a particular pattern of inputs and outputs. I don't think, though, that 'mentality' is an all-or-nothing concept.
17. Mind and Body / E. Mind as Physical / 7. Anti-Physicalism / b. Multiple realisability
Physicalism is prejudiced in favour of our neurology, when other systems might have minds [Block]
     Full Idea: Physicalism is a chauvinist theory: it withholds mental properties from systems that in fact have them.
     From: Ned Block (Troubles with Functionalism [1978], p. 71)
     A reaction: This criticism interprets physicalism too rigidly. There may be several ways to implement a state. My own view is that other systems might implement our functions, but they won't experience them in a human way.
18. Thought / B. Mechanics of Thought / 6. Artificial Thought / b. Turing Machines
Simple machine-functionalism says mind just is a Turing machine [Block]
     Full Idea: In the simplest Turing-machine version of functionalism (Putnam 1967), mental states are identified with the total Turing-machine state, involving a machine table and its inputs and outputs.
     From: Ned Block (Troubles with Functionalism [1978], p. 70)
     A reaction: This obviously invites the question of why mental states would be conscious and phenomenal, given that modern computers are devoid of same, despite being classy Turing machines.
A Turing machine, given a state and input, specifies an output and the next state [Block]
     Full Idea: In a Turing machine, given any state and input, the machine table specifies an output and the next state. …To have full power the tape must be infinite in at least one direction, and be movable in both directions.
     From: Ned Block (Troubles with Functionalism [1978], p. 71)
     A reaction: In retrospect, the proposal that this feeble item should be taken as a model for the glorious complexity and richness of human consciousness doesn't look too plausible.
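     A minimal sketch of the machine-table idea in Python may help here. It is illustrative only: the names run_turing_machine and flip_table, and the example table, are my own assumptions, not anything in Block's text. It simply shows how (state, symbol read) pairs determine (symbol to write, head move, next state), as the Full Idea describes.

     # Machine table: (state, symbol read) -> (symbol to write, head move, next state)
     def run_turing_machine(table, tape, state, halt_states, max_steps=1000):
         cells = dict(enumerate(tape))          # sparse tape; unvisited cells read as blank '_'
         head = 0
         for _ in range(max_steps):
             if state in halt_states:
                 break
             symbol = cells.get(head, '_')
             write, move, state = table[(state, symbol)]
             cells[head] = write
             head += 1 if move == 'R' else -1
         return ''.join(cells[i] for i in sorted(cells))

     # Toy table: flip every bit on the tape, then halt at the first blank cell.
     flip_table = {
         ('flip', '0'): ('1', 'R', 'flip'),
         ('flip', '1'): ('0', 'R', 'flip'),
         ('flip', '_'): ('_', 'R', 'halt'),
     }

     print(run_turing_machine(flip_table, '1011', 'flip', {'halt'}))   # prints 0100_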
18. Thought / C. Content / 1. Content
Naturalists must explain both representation, and what is represented [Schulte]
     Full Idea: Naturalistic accounts of content ask 1) what makes a state qualify as a representational state?, and 2) what makes a representational state have one specific content rather than another?
     From: Peter Schulte (Mental Content [2023], 4)
     A reaction: [As often in this collection, the author uses algebraic letters, but I prefer plain English] I would say that the first question looks more amenable to an answer than the second. Do we know the neuronal difference between seeing red and blue?
Phenomenal and representational character may have links, or even be united [Schulte]
     Full Idea: Some theorists maintain that all states with representational content or intentionality must have phenomenal character …and we can also ask whether all states with phenomenal character also have representational content.
     From: Peter Schulte (Mental Content [2023], 2.4)
     A reaction: He mentions that beliefs could involve inner speech. And pains and moods may be phenomenal but lack content. He also asks which determines which.
Naturalistic accounts of content cannot rely on primitive mental or normative notions [Schulte]
     Full Idea: A 'naturalistic' explanation of content excludes primitive mental or normative notions, but allows causation, counterfactual dependence, probabilistic dependence or structural similarity.
     From: Peter Schulte (Mental Content [2023], 4)
     A reaction: Apart from causation, what is permissible to naturalists (like me) all sounds rather superficial (and thus not very explanatory). I'm sure we can do better than this. How about using non-primitive mental notions?
Maybe we can explain mental content in terms of phenomenal properties [Schulte]
     Full Idea: The phenomenal intentionality approach says that the content properties of mental states can be explained in terms of the phenomenal properties of mental states.
     From: Peter Schulte (Mental Content [2023], 6)
     A reaction: [Searle and Loar are cited] Tends to be 'non-naturalistic'. We might decide that content derives from the phenomenal, but still without saying anything interesting about content. Mathematical content? Universally generalised content?
Naturalist accounts of representation must match the views of cognitive science [Schulte]
     Full Idea: Recent naturalistic accounts of content also have to offer a matching account of representational explanations in cognitive science.
     From: Peter Schulte (Mental Content [2023], 08.1)
     A reaction: [He cites Cummins, Neander and Shea] This is in addition to the 'status' and 'content' questions of Idea 23796. This seems to be an interesting shift to philosophers working backwards from the theories of empirical science. Few are qualified for this job!
On the whole, referential content is seen as broad, and sense content as narrow [Schulte]
     Full Idea: We can say that non-Fregean content [reference] is (virtually) always construed as broad, while Fregean content [sense] is usually construed as narrow.
     From: Peter Schulte (Mental Content [2023], 3.2)
     A reaction: I can't make sense of mental content actually being outside the mind, so I see all content as narrow - but that doesn't mean that externals are irrelevant to it. If I think that is an oak, and it's an elm, the content is oak.
18. Thought / C. Content / 9. Conceptual Role Semantics
Conceptual role semantics says content is determined by cognitive role [Schulte]
     Full Idea: Conceptual role semantics says the content of a representation is determined by the cognitive role it plays within a system.
     From: Peter Schulte (Mental Content [2023], 4.5)
     A reaction: Obvious problem: if 'swordfish' is the password, its role is quite different from its content. I've never thought that the role of something tells you anything about what it is. Hearts pump blood, but how do they fulfil that role?
18. Thought / C. Content / 10. Causal Semantics
Cause won't explain content, because one cause can produce several contents [Schulte]
     Full Idea: A simple causal theory of content has the 'content indeterminacy' problem - that the presence of a cow causes 'a cow is present', but also 'an animal is present' and 'a biological organism is present'.
     From: Peter Schulte (Mental Content [2023], 4.1)
     A reaction: That only rules out the 'simple' version. We just need to add that the cause (cow experience) is shaped by current knowledge and interests. Someone buying cows and someone terrified of them thereby produce different concepts.
18. Thought / C. Content / 11. Teleological Semantics
Teleosemantics explains content in terms of successful and unsuccessful functioning [Schulte]
     Full Idea: The core idea of teleosemantics is that we need to explain how content can be accurate or inaccurate, true or false, realised or unrealised …which must appeal to the distinction between proper functioning and malfunctioning.
     From: Peter Schulte (Mental Content [2023], 4.4)
     A reaction: My immediate reaction to this is that you don't learn about content by assessing its success. Surely (as with eyesight) you first need to understand what it does, and only then judge its success. …Though success and failure are implicit in function.
Teleosemantic explanations say content is the causal result of naturally selected functions [Schulte]
     Full Idea: Teleosemantic theories usually give a causal account of mental functions …where some trait has a particular function if it was selected for that function by a process of natural selection.
     From: Peter Schulte (Mental Content [2023], 4.4)
     A reaction: This is an idea I like - that something has a specific function if without that function it wouldn't have come into existence (eyes, for example). But presumably the function of a mind is to collect content - which does nothing to explain content!
18. Thought / C. Content / 12. Informational Semantics
Information theories say content is information, such as smoke making fire probable [Schulte]
     Full Idea: Information theories of content [usually assume that] a column of smoke over there carries the information that fire is over there because it raises the probability of fire being over there.
     From: Peter Schulte (Mental Content [2023], 4.2)
     A reaction: Theorists usually add further conditions to this basic one. Fred Dretske is the source of this approach. Not promising, in my opinion. Surely the content is just smoke, and fire is one of dozens of possible inferences from it?
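     As a toy illustration of the probability-raising condition (the numbers are my own assumptions, not Dretske's or Schulte's): if the prior probability of fire is P(fire) = 0.01, but P(fire | smoke) = 0.8, then seeing the column of smoke raises the probability of fire, and on this view the smoke thereby carries the information that fire is present; if P(fire | smoke) were equal to P(fire), the smoke would carry no such information.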
19. Language / C. Assigning Meanings / 1. Syntax
Intuition may say that a complex sentence is ungrammatical, but linguistics can show that it is not [Block]
     Full Idea: Linguistics rejects (on theoretical grounds) the intuition that the sentence "the boy the girl the cat bit scratched died" is ungrammatical.
     From: Ned Block (Troubles with Functionalism [1978], p. 78)
     A reaction: Once we have disentangled it, we practical speakers have no right to say it is ungrammatical. It isn't only theory. The sentence is just stylistically infelicitous.
26. Natural Theory / A. Speculations on Nature / 5. Infinite in Nature
Archelaus was the first person to say that the universe is boundless [Archelaus, by Diog. Laertius]
     Full Idea: Archelaus was the first person to say that the universe is boundless.
     From: report of Archelaus (fragments/reports [c.450 BCE]) by Diogenes Laertius - Lives of Eminent Philosophers 02.Ar.3
27. Natural Reality / G. Biology / 3. Evolution
Archelaus said life began in a primeval slime [Archelaus, by Schofield]
     Full Idea: Archelaus wrote that life on Earth began in a primeval slime.
     From: report of Archelaus (fragments/reports [c.450 BCE]) by Malcolm Schofield - Archelaus
     A reaction: This sounds like a fairly clearcut assertion of the production of life by evolution. Darwin's contribution was to propose the mechanism for achieving it. We should honour the name of Archelaus for this idea.