21 ideas
13838 | A decent modern definition should always imply a semantics [Hacking]
Full Idea: Today we expect that anything worth calling a definition should imply a semantics.
From: Ian Hacking (What is Logic? [1979], §10)
A reaction: He compares this with Gentzen 1935, who was attempting purely syntactic definitions of the logical connectives.
13833 | 'Thinning' ('dilution') is the key difference between deduction (which allows it) and induction [Hacking]
Full Idea: 'Dilution' (or 'Thinning') provides an essential contrast between deductive and inductive reasoning; for the introduction of new premises may spoil an inductive inference.
From: Ian Hacking (What is Logic? [1979], §06.2)
A reaction: That is, inductive logic (if there is such a thing) is clearly non-monotonic, whereas classical deductive logic is monotonic.
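The structural rule at issue can be set out in sequent notation (a standard formulation, added here for reference, not a quotation from Hacking):

```latex
% Thinning (Weakening / 'Dilution'): a valid sequent stays valid
% when a new premise or a new conclusion is added.
\[
\frac{\Gamma \vdash \Theta}{\Gamma, A \vdash \Theta}
\qquad\qquad
\frac{\Gamma \vdash \Theta}{\Gamma \vdash A, \Theta}
\]
% This is exactly monotonicity: if \Gamma \vdash \Theta, then
% \Gamma \cup \{A\} \vdash \Theta as well, whereas an inductive
% inference can be spoiled by the extra premise A.
```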
13834 | Gentzen's Cut Rule (or transitivity of deduction) is 'If A |- B and B |- C, then A |- C' [Hacking]
Full Idea: If A |- B and B |- C, then A |- C. This generalises to: If Γ |- A,Θ and Γ,A |- Θ, then Γ |- Θ. Gentzen called this 'cut'. It is the transitivity of a deduction.
From: Ian Hacking (What is Logic? [1979], §06.3)
A reaction: I read the generalisation as 'If A can be either a premise or a conclusion, you can bypass it'. The first version is just transitivity (which bypasses the middle step).
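In sequent notation, the two versions quoted above look like this (a standard presentation, added for reference):

```latex
% Simple Cut (transitivity of deduction):
\[
\frac{A \vdash B \qquad B \vdash C}{A \vdash C}
\]
% Gentzen's generalised Cut: the formula A, appearing once as a
% conclusion and once as a premise, is 'cut out' of the derivation.
\[
\frac{\Gamma \vdash A, \Theta \qquad \Gamma, A \vdash \Theta}{\Gamma \vdash \Theta}
\]
```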
13835 | Only Cut reduces complexity, so logic is constructive without it, and it can be dispensed with [Hacking]
Full Idea: Only the cut rule can have a conclusion that is less complex than its premises. Hence when cut is not used, a derivation is quite literally constructive, building up from components. Any theorem obtained by cut can be obtained without it.
From: Ian Hacking (What is Logic? [1979], §08)
13845 | The various logics are abstractions made from terms like 'if...then' in English [Hacking]
Full Idea: I don't believe English is by nature classical or intuitionistic etc. These are abstractions made by logicians. Logicians attend to numerous different objects that might be served by 'If...then', like material conditional, strict or relevant implication.
From: Ian Hacking (What is Logic? [1979], §15)
A reaction: The idea that they are 'abstractions' is close to my heart. Abstractions from what? Surely 'if...then' has a standard character when employed in normal conversation?
13840 | First-order logic is the strongest complete compact theory with Löwenheim-Skolem [Hacking]
Full Idea: First-order logic is the strongest complete compact theory with a Löwenheim-Skolem theorem.
From: Ian Hacking (What is Logic? [1979], §13)
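Hacking's delineation here echoes Lindström's theorem (1969); a compact statement of that result, added for reference, is:

```latex
% Lindström's theorem: no proper strengthening of first-order logic
% (here L_{\omega\omega}) keeps both compactness and the downward
% Löwenheim-Skolem property.
\[
\mathcal{L} \geq \mathcal{L}_{\omega\omega},\ \
\mathcal{L}\ \text{compact},\ \
\mathcal{L}\ \text{has downward L\"owenheim-Skolem}
\ \implies\
\mathcal{L} \equiv \mathcal{L}_{\omega\omega}
\]
```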
13844 | A limitation of first-order logic is that it cannot handle branching quantifiers [Hacking]
Full Idea: Henkin proved that there is no first-order treatment of branching quantifiers, which do not seem to involve any idea that is fundamentally different from ordinary quantification.
From: Ian Hacking (What is Logic? [1979], §13)
A reaction: See Hacking for an example of branching quantifiers. Hacking is impressed by this as a real limitation of the first-order logic which he generally favours.
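The standard example is presumably the Henkin quantifier prefix; a common presentation (added here for reference) is:

```latex
% Henkin's branching prefix: y depends only on x, and w only on z.
\[
\begin{pmatrix} \forall x\, \exists y \\ \forall z\, \exists w \end{pmatrix}
\varphi(x,y,z,w)
\]
% Its intended reading is second-order, via Skolem functions, which is
% why no linear first-order prefix captures the dependencies:
\[
\exists f\, \exists g\, \forall x\, \forall z\ \varphi(x, f(x), z, g(z))
\]
```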
13842 | Second-order completeness seems to need intensional entities and possible worlds [Hacking]
Full Idea: Second-order logic has no chance of a completeness theorem unless one ventures into intensional entities and possible worlds.
From: Ian Hacking (What is Logic? [1979], §13)
13837 | With a pure notion of truth and consequence, the meanings of connectives are fixed syntactically [Hacking]
Full Idea: My doctrine is that the peculiarity of the logical constants resides precisely in that given a certain pure notion of truth and consequence, all the desirable semantic properties of the constants are determined by their syntactic properties.
From: Ian Hacking (What is Logic? [1979], §09)
A reaction: He opposes this to Peacocke 1976, who claims that the logical connectives are essentially semantic in character, concerned with the preservation of truth.
13839 | Perhaps variables could be dispensed with, by arrows joining places in the scope of quantifiers [Hacking]
Full Idea: For some purposes the variables of first-order logic can be regarded as prepositions and place-holders that could in principle be dispensed with, say by a system of arrows indicating what places fall in the scope of which quantifier.
From: Ian Hacking (What is Logic? [1979], §11)
A reaction: I tend to think of variables as either pronouns, or as definite descriptions, or as temporary names, but not as prepositions. Must address this new idea...
13843 | If it is a logic, the Löwenheim-Skolem theorem holds for it [Hacking]
Full Idea: A Löwenheim-Skolem theorem holds for anything which, on my delineation, is a logic.
From: Ian Hacking (What is Logic? [1979], §13)
A reaction: I take this to be an unusually conservative view. Shapiro is the chap who can give you an alternative view of these things, or Boolos.
2526 | Philosophers regularly confuse failures of imagination with insights into necessity [Dennett]
Full Idea: The besetting foible of philosophers is mistaking failures of imagination for insights into necessity.
From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
2523 | That every mammal has a mother is a secure reality, but without foundations [Dennett]
Full Idea: Naturalistic philosophers should look with favour on the finite regress that peters out without foundations or thresholds or essences. That every mammal has a mother does not imply an infinite regress. Mammals have secure reality without foundations.
From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
A reaction: I love this thought, which has permeated my thinking quite extensively. Logicians are terrified of regresses, but this may be because they haven't understood the vagueness of language.
2528 | Does consciousness need the concept of consciousness? [Dennett]
Full Idea: You can't have consciousness until you have the concept of consciousness.
From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
A reaction: If you read enough Dennett this begins to sound vaguely plausible, but next day it sounds like an absurd claim. 'You can't see a tree until you have the concept of a tree?' When do children acquire the concept of consciousness? Are apes non-conscious?
2525 | Maybe language is crucial to consciousness [Dennett]
Full Idea: I continue to argue for a crucial role of natural language in generating the central features of consciousness.
From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
A reaction: 'Central features' might beg the question. Dennett does doubt the consciousness of animals (1996). As I stare out of my window, his proposal seems deeply counterintuitive. How could language 'generate' consciousness? Would loss of language create zombies?
2527 | Unconscious intentionality is the foundation of the mind [Dennett]
Full Idea: It is on the foundation of unconscious intentionality that the higher-order complexities developed that have culminated in what we call consciousness.
From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
A reaction: Sounds right to me. Pace Searle, I have no problem with unconscious intentionality, and the general homuncular picture of low levels building up to complex high levels, which suddenly burst into the song and dance of consciousness.
2530 | Could a robot be made conscious just by software? [Dennett]
Full Idea: How could you make a robot conscious? The answer, I think, is to be found in software.
From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
A reaction: This seems to be a commitment to strong AI, though Dennett is keen to point out that brains are the only plausible implementation of such software. Most find his claim baffling.
2524 | A language of thought doesn't explain content [Dennett]
Full Idea: Postulating a language of thought is a postponement of the central problems of content ascription, not a necessary first step.
From: Daniel C. Dennett (Brainchildren [1998], Ch.25)
A reaction: If the idea of content is built on the idea of representation, then you need some account of what the brain does with its representations.
2529 | Maybe there can be non-conscious concepts (e.g. in bees) [Dennett]
Full Idea: Concepts do not require consciousness. As Jaynes says, the bee has a concept of a flower, but not a conscious concept.
From: Daniel C. Dennett (Brainchildren [1998], Ch.6)
A reaction: Does the flower have a concept of rain? Rain plays a big functional role in its existence. It depends, alas, on what we mean by a 'concept'.
13304 | Learned men gain more in one day than others do in a lifetime [Posidonius]
Full Idea: In a single day there lies open to men of learning more than there ever does to the unenlightened in the longest of lifetimes.
From: Posidonius (fragments/reports [c.95 BCE]), quoted by Seneca the Younger - Letters from a Stoic 078
A reaction: These remarks endorsing the infinite superiority of the educated to the uneducated seem to have been popular in late antiquity. It tends to be the religions which discourage great learning, especially in their emphasis on a single book.
20820 | Time is an interval of motion, or the measure of speed [Posidonius, by Stobaeus]
Full Idea: Posidonius defined time thus: it is an interval of motion, or the measure of speed and slowness.
From: report of Posidonius (fragments/reports [c.95 BCE]) by John Stobaeus - Anthology 1.08.42
A reaction: Hm. Can we define motion or speed without alluding to time? Looks like we have to define them as a conjoined pair, which means we cannot fully understand either of them.