Combining Texts

All the ideas for 'What is Logic?' (Ian Hacking), 'Introduction - Ontology' (William Lycan) and 'Human Knowledge: its scope and limits' (Bertrand Russell)



21 ideas

2. Reason / D. Definition / 3. Types of Definition
A decent modern definition should always imply a semantics [Hacking]
     Full Idea: Today we expect that anything worth calling a definition should imply a semantics.
     From: Ian Hacking (What is Logic? [1979], §10)
     A reaction: He compares this with Gentzen 1935, who was attempting purely syntactic definitions of the logical connectives.
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Thinning' ('dilution') is the key difference between deduction (which allows it) and induction [Hacking]
     Full Idea: 'Dilution' (or 'Thinning') provides an essential contrast between deductive and inductive reasoning; for the introduction of new premises may spoil an inductive inference.
     From: Ian Hacking (What is Logic? [1979], §06.2)
     A reaction: That is, inductive logic (if there is such a thing) is clearly non-monotonic, whereas classical deductive logic is monotonic.
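     For reference, a sketch of the Thinning (Dilution) rule in standard sequent notation (Γ and Θ are arbitrary lists of side formulas, matching the notation of the Cut rule in the next idea); deduction tolerates an arbitrary extra premise A, which is exactly what an inductive inference cannot guarantee:
     \[
     \frac{\Gamma \vdash \Theta}{\Gamma, A \vdash \Theta}\ (\text{Thinning left})
     \qquad\qquad
     \frac{\Gamma \vdash \Theta}{\Gamma \vdash \Theta, A}\ (\text{Thinning right})
     \]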
Gentzen's Cut Rule (or transitivity of deduction) is 'If A |- B and B |- C, then A |- C' [Hacking]
     Full Idea: If A |- B and B |- C, then A |- C. This generalises to: If Γ |- A,Θ and Γ,A |- Θ, then Γ |- Θ. Gentzen called this 'cut'. It is the transitivity of a deduction.
     From: Ian Hacking (What is Logic? [1979], §06.3)
     A reaction: I read the generalisation as 'If A can be either a premise or a conclusion, you can bypass it'. The first version is just transitivity (which by-passes the middle step).
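     In the usual two-premise inference layout, the generalised rule quoted above can be displayed as follows (a sketch; the cut formula A appears in both premises and disappears from the conclusion):
     \[
     \frac{\Gamma \vdash A, \Theta \qquad\quad \Gamma, A \vdash \Theta}{\Gamma \vdash \Theta}\ (\text{Cut})
     \]
     The simple transitivity version (A |- B and B |- C give A |- C) is the special case recovered with the help of Thinning on the side formulas.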
Only Cut reduces complexity, so logic is constructive without it, and it can be dispensed with [Hacking]
     Full Idea: Only the cut rule can have a conclusion that is less complex than its premises. Hence when cut is not used, a derivation is quite literally constructive, building up from components. Any theorem obtained by cut can be obtained without it.
     From: Ian Hacking (What is Logic? [1979], §08)
5. Theory of Logic / A. Overview of Logic / 4. Pure Logic
The various logics are abstractions made from terms like 'if...then' in English [Hacking]
     Full Idea: I don't believe English is by nature classical or intuitionistic etc. These are abstractions made by logicians. Logicians attend to numerous different objects that might be served by 'If...then', like material conditional, strict or relevant implication.
     From: Ian Hacking (What is Logic? [1979], §15)
     A reaction: The idea that they are 'abstractions' is close to my heart. Abstractions from what? Surely 'if...then' has a standard character when employed in normal conversation?
5. Theory of Logic / A. Overview of Logic / 5. First-Order Logic
First-order logic is the strongest complete compact theory with Löwenheim-Skolem [Hacking]
     Full Idea: First-order logic is the strongest complete compact theory with a Löwenheim-Skolem theorem.
     From: Ian Hacking (What is Logic? [1979], §13)
A limitation of first-order logic is that it cannot handle branching quantifiers [Hacking]
     Full Idea: Henkin proved that there is no first-order treatment of branching quantifiers, which do not seem to involve any idea that is fundamentally different from ordinary quantification.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: See Hacking for an example of branching quantifiers. Hacking is impressed by this as a real limitation of the first-order logic which he generally favours.
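     Since the reaction defers to Hacking for an example, here is the standard textbook case (the Henkin quantifier), offered as an illustrative sketch rather than Hacking's own: y is to depend only on x, and v only on u, which is naturally spelled out with Skolem functions, and the quantification over those functions is what exceeds first-order resources:
     \[
     \begin{pmatrix} \forall x\ \exists y \\ \forall u\ \exists v \end{pmatrix} \varphi(x,y,u,v)
     \quad\text{read as}\quad
     \exists f\, \exists g\; \forall x\, \forall u\; \varphi\big(x, f(x), u, g(u)\big)
     \]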
5. Theory of Logic / A. Overview of Logic / 7. Second-Order Logic
Second-order completeness seems to need intensional entities and possible worlds [Hacking]
     Full Idea: Second-order logic has no chance of a completeness theorem unless one ventures into intensional entities and possible worlds.
     From: Ian Hacking (What is Logic? [1979], §13)
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
With a pure notion of truth and consequence, the meanings of connectives are fixed syntactically [Hacking]
     Full Idea: My doctrine is that the peculiarity of the logical constants resides precisely in that, given a certain pure notion of truth and consequence, all the desirable semantic properties of the constants are determined by their syntactic properties.
     From: Ian Hacking (What is Logic? [1979], §09)
     A reaction: He opposes this to Peacocke 1976, who claims that the logical connectives are essentially semantic in character, concerned with the preservation of truth.
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / c. not
Is it possible to state every possible truth about the whole course of nature without using 'not'? [Russell]
     Full Idea: Imagine a person who knew everything that can be stated without using the word 'not' or some equivalent; would such a person know the whole course of nature, or would he not?
     From: Bertrand Russell (Human Knowledge: its scope and limits [1948], 9)
     A reaction: Nowadays we might express Russell's thought as 'Does God need the word 'not'?'. Russell's thesis is that such words concern psychology, and not physics. God would need 'not' to describe how human minds work.
5. Theory of Logic / E. Structures of Logic / 4. Variables in Logic
Perhaps variables could be dispensed with, by arrows joining places in the scope of quantifiers [Hacking]
     Full Idea: For some purposes the variables of first-order logic can be regarded as prepositions and place-holders that could in principle be dispensed with, say by a system of arrows indicating what places fall in the scope of which quantifier.
     From: Ian Hacking (What is Logic? [1979], §11)
     A reaction: I tend to think of variables as either pronouns, or as definite descriptions, or as temporary names, but not as prepositions. Must address this new idea...
5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
If it is a logic, the Löwenheim-Skolem theorem holds for it [Hacking]
     Full Idea: A Löwenheim-Skolem theorem holds for anything which, on my delineation, is a logic.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: I take this to be an unusually conservative view. Shapiro is the chap who can give you an alternative view of these things, or Boolos.
7. Existence / C. Structure of Existence / 3. Levels of Reality
Biologists see many organic levels, 'abstract' if seen from below, 'structural' if seen from above [Lycan]
     Full Idea: Biologists don't split living things into a 'structural' level and an 'abstract' level; ..rather, they are organised at many levels, each level 'abstract' with respect to those beneath it, but 'structural' as it realises those levels above it.
     From: William Lycan (Introduction - Ontology [1999], p.9)
     A reaction: This is a very helpful distinction. Compare Idea 4601. It seems to fit well with the 'homuncular' picture of a hierarchical mind, and explains why there are so many levels of description available for mental life.
9. Objects / F. Identity among Objects / 6. Identity between Objects
'Lightning is electric discharge' and 'Phosphorus is Venus' are synthetic a posteriori identities [Lycan]
     Full Idea: There is such a thing as synthetic and a posteriori identity that is nonetheless genuine identity, as in lightning being electrical discharge, and the Morning Star being Venus.
     From: William Lycan (Introduction - Ontology [1999], p.5)
     A reaction: It is important to note that although these identities are synthetic a posteriori, that doesn't make them contingent. The early identity theorists like Smart seemed to think that it did. Kripke must be right that they are necessary identities.
10. Modality / A. Necessity / 6. Logical Necessity
Some facts about experience feel like logical necessities [Russell]
     Full Idea: The impossibility of seeing two colours simultaneously in a given direction feels like a logical impossibility.
     From: Bertrand Russell (Human Knowledge: its scope and limits [1948], 9)
     A reaction: I presume all necessities feel equally necessary. If we distinguish necessities by what gives rise to them (a view I favour) then how strong they 'feel' will be irrelevant. We can see why Russell is puzzled by the phenomenon, though.
12. Knowledge Sources / D. Empiricism / 5. Empiricism Critique
It is hard to explain how a sentence like 'it is not raining' can be found true by observation [Russell]
     Full Idea: If 'it is not raining' means 'the sentence "it is raining" is false', that makes it almost impossible to understand how a sentence containing the word 'not' can be found true by observation.
     From: Bertrand Russell (Human Knowledge: its scope and limits [1948], 9)
     A reaction: Russell goes on to explore the general difficulty of deciding negative truths by observation. The same problem arises for truthmaker theory. Obviously I can observe that it isn't raining, but it seems parasitic on observing when it is raining.
17. Mind and Body / C. Functionalism / 2. Machine Functionalism
Functionalism has three linked levels: physical, functional, and mental [Lycan]
     Full Idea: Functionalism has three distinct levels of description: a neurophysiological description, a functional description (relative to a program which the brain is realising), and it may have a further mental description.
     From: William Lycan (Introduction - Ontology [1999], p.6)
     A reaction: I have always thought that the 'levels of description' idea was very helpful in describing the mind/brain. I feel certain that we are dealing with a single thing, so this is the only way we can account for the diverse ways in which we discuss it.
17. Mind and Body / C. Functionalism / 5. Teleological Functionalism
A mental state is a functional realisation of a brain state when it serves the purpose of the organism [Lycan]
     Full Idea: Some theorists have said that the one-to-one correspondence between the organism and parts of its 'program' is too liberal, and suggest that the state and its functional role are seen teleologically, as functioning 'for' the organism.
     From: William Lycan (Introduction - Ontology [1999], p.9)
     A reaction: This seems an inevitable development, once the notion of a 'function' is considered. It has to be fitted into some sort of Aristotelian teleological picture, even if the functions are seen subjectively (by what?). Purpose is usually seen as evolutionary.
19. Language / F. Communication / 3. Denial
If we define 'this is not blue' as disbelief in 'this is blue', we eliminate 'not' as an ingredient of facts [Russell]
     Full Idea: We can reintroduce 'not' by a definition: the words 'this is not blue' are defined as expressing disbelief in what is expressed by the words 'this is blue'. In this way the need of 'not' as an indefinable constituent of facts is avoided.
     From: Bertrand Russell (Human Knowledge: its scope and limits [1948], 9)
     A reaction: This is part of Russell's programme of giving a psychological account of logical connectives. See other ideas from his 1940 and 1948 works. He observes that disbelief is a state just as positive as belief. I love it.
26. Natural Theory / A. Speculations on Nature / 2. Natural Purpose / c. Purpose denied
People are trying to explain biological teleology in naturalistic causal terms [Lycan]
     Full Idea: There is now a small but vigorous industry whose purpose is to explicate biological teleology in naturalistic terms, typically in terms of causes.
     From: William Lycan (Introduction - Ontology [1999], p.10)
     A reaction: This looks like a good strategy. In some sense, it seems clear that the moon has no purpose, but an eyeball has one. Via evolution, one would expect to reduce this to causation. Purposes are real (not subjective), but they are reducible.
27. Natural Reality / A. Classical Physics / 1. Mechanics / a. Explaining movement
Russell's 'at-at' theory says motion is to be at the intervening points at the intervening instants [Russell, by Psillos]
     Full Idea: To reply to Zeno's Arrow Paradox, Russell developed his 'at-at' theory of motion, which says that to move from A to B is to be at the intervening points at the intervening instants.
     From: report of Bertrand Russell (Human Knowledge: its scope and limits [1948]) by Stathis Psillos - Causation and Explanation §4.2
     A reaction: I wonder whether Russell's target was actually Zeno, or was it a simplified ontology of points and instants? The ontology will also need identity, to ensure it is the same thing which arrives at each point.
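     A rough formalisation of the 'at-at' analysis, offered only as a sketch (the instants t0 and t1, the trajectory f, and the predicate At(o,p,t) for 'object o is at point p at instant t' are illustrative labels, not Russell's notation): motion from A to B is exhausted by the pattern of occupations, with no further 'state of moving' at any instant.
     \[
     \text{Moves}(o, A, B) \;\leftrightarrow\; \exists f\, \Big[ f(t_0) = A \;\wedge\; f(t_1) = B \;\wedge\; \forall t \in [t_0, t_1]\ \text{At}\big(o, f(t), t\big) \Big]
     \]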