Ideas of Ian Hacking, by Theme

[Canadian, b.1936, at the University of Toronto and at Stanford University.]

1. Philosophy / C. History of Philosophy / 4. Later European Philosophy / b. Seventeenth century philosophy
Gassendi is the first great empiricist philosopher
     Full Idea: Gassendi is the first in the great line of empiricist philosophers that gradually came to dominate European thought.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.5)
     A reaction: Epicurus, of course, was clearly an empiricist. British readers should note that Gassendi was not British.
2. Reason / D. Definition / 3. Types of Definition
A decent modern definition should always imply a semantics
     Full Idea: Today we expect that anything worth calling a definition should imply a semantics.
     From: Ian Hacking (What is Logic? [1979], §10)
     A reaction: He compares this with Gentzen 1935, who was attempting purely syntactic definitions of the logical connectives.
4. Formal Logic / B. Propositional Logic (PL) / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Thinning' ('dilution') is the key difference between deduction (which allows it) and induction
     Full Idea: 'Dilution' (or 'Thinning') provides an essential contrast between deductive and inductive reasoning; for the introduction of new premises may spoil an inductive inference.
     From: Ian Hacking (What is Logic? [1979], §06.2)
     A reaction: That is, inductive logic (if there is such a thing) is clearly non-monotonic, whereas classical deductive logic is monotonic.
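The Thinning (Weakening) rule that marks this contrast can be displayed in sequent notation. The failing inductive inference below is a standard textbook illustration, not Hacking's own example:

```latex
% Thinning (Weakening): an added premise never spoils a deduction
\frac{\Gamma \vdash \Theta}{\Gamma, A \vdash \Theta}\ (\text{Thinning})

% Inductively, an added premise can overturn the conclusion:
%   Birds fly, Tweety is a bird            => (probably) Tweety flies
%   Birds fly, Tweety is a bird,
%   Tweety is a penguin                    =/=> Tweety flies
```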
Gentzen's Cut Rule (or transitivity of deduction) is 'If A |- B and B |- C, then A |- C'
     Full Idea: If A |- B and B |- C, then A |- C. This generalises to: If Γ|-A,Θ and Γ,A |- Θ, then Γ |- Θ. Gentzen called this 'cut'. It is the transitivity of a deduction.
     From: Ian Hacking (What is Logic? [1979], §06.3)
     A reaction: I read the generalisation as 'If A can be either a premise or a conclusion, you can bypass it'. The first version is just transitivity (which bypasses the middle step).
Only Cut reduces complexity, so logic is constructive without it, and it can be dispensed with
     Full Idea: Only the cut rule can have a conclusion that is less complex than its premises. Hence when cut is not used, a derivation is quite literally constructive, building up from components. Any theorem obtained by cut can be obtained without it.
     From: Ian Hacking (What is Logic? [1979], §08)
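The generalised Cut rule described in the two entries above, set out in Gentzen's sequent notation:

```latex
% Cut: the intermediate formula A is bypassed
\frac{\Gamma \vdash A, \Theta \qquad \Gamma, A \vdash \Theta}{\Gamma \vdash \Theta}\ (\text{Cut})

% Special case, plain transitivity of deduction:
%   If A \vdash B and B \vdash C, then A \vdash C
```

Gentzen's Hauptsatz (the cut-elimination theorem) is the result behind the claim that any theorem obtainable with Cut is obtainable without it.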
5. Theory of Logic / A. Overview of Logic / 4. Pure Logic
The various logics are abstractions made from terms like 'if...then' in English
     Full Idea: I don't believe English is by nature classical or intuitionistic etc. These are abstractions made by logicians. Logicians attend to numerous different objects that might be served by 'If...then', like material conditional, strict or relevant implication.
     From: Ian Hacking (What is Logic? [1979], §15)
     A reaction: The idea that they are 'abstractions' is close to my heart. Abstractions from what? Surely 'if...then' has a standard character when employed in normal conversation?
5. Theory of Logic / A. Overview of Logic / 5. First-Order Logic
First-order logic is the strongest complete compact theory with Löwenheim-Skolem
     Full Idea: First-order logic is the strongest complete compact theory with a Löwenheim-Skolem theorem.
     From: Ian Hacking (What is Logic? [1979], §13)
A limitation of first-order logic is that it cannot handle branching quantifiers
     Full Idea: Henkin proved that there is no first-order treatment of branching quantifiers, which do not seem to involve any idea that is fundamentally different from ordinary quantification.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: See Hacking for an example of branching quantifiers. Hacking is impressed by this as a real limitation of the first-order logic which he generally favours.
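A standard example of a branching (Henkin) quantifier prefix of the kind Hacking discusses, in the usual two-row notation, where y depends only on x and v only on u:

```latex
\left(\begin{array}{c}
\forall x\, \exists y \\
\forall u\, \exists v
\end{array}\right)
\varphi(x, y, u, v)
```

Because neither row's existential may depend on the other row's universal, the prefix cannot in general be rewritten as a linear first-order prefix; that is the content of the result attributed here to Henkin.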
5. Theory of Logic / A. Overview of Logic / 7. Second-Order Logic
Second-order completeness seems to need intensional entities and possible worlds
     Full Idea: Second-order logic has no chance of a completeness theorem unless one ventures into intensional entities and possible worlds.
     From: Ian Hacking (What is Logic? [1979], §13)
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
With a pure notion of truth and consequence, the meanings of connectives are fixed syntactically
     Full Idea: My doctrine is that the peculiarity of the logical constants resides precisely in that given a certain pure notion of truth and consequence, all the desirable semantic properties of the constants are determined by their syntactic properties.
     From: Ian Hacking (What is Logic? [1979], §09)
     A reaction: He opposes this to Peacocke 1976, who claims that the logical connectives are essentially semantic in character, concerned with the preservation of truth.
5. Theory of Logic / E. Structures of Logic / 4. Variables in Logic
Perhaps variables could be dispensed with, by arrows joining places in the scope of quantifiers
     Full Idea: For some purposes the variables of first-order logic can be regarded as prepositions and place-holders that could in principle be dispensed with, say by a system of arrows indicating what places fall in the scope of which quantifier.
     From: Ian Hacking (What is Logic? [1979], §11)
     A reaction: I tend to think of variables as either pronouns, or as definite descriptions, or as temporary names, but not as prepositions. Must address this new idea...
5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
If it is a logic, the Löwenheim-Skolem theorem holds for it
     Full Idea: A Löwenheim-Skolem theorem holds for anything which, on my delineation, is a logic.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: I take this to be an unusually conservative view. Shapiro is the chap who can give you an alternative view of these things, or Boolos.
10. Modality / B. Possibility / 6. Probability
Probability was fully explained between 1654 and 1812
     Full Idea: There is hardly any history of probability to record before Pascal (1654), and the whole subject is very well understood after Laplace (1812).
     From: Ian Hacking (The Emergence of Probability [1975], Ch.1)
     A reaction: An interesting little pointer on the question of whether the human race is close to exhausting all the available intellectual problems. What then?
Probability is statistical (behaviour of chance devices) or epistemological (belief based on evidence)
     Full Idea: Probability has two aspects: the degree of belief warranted by evidence, and the tendency displayed by some chance device to produce stable relative frequencies. These are the epistemological and statistical aspects of the subject.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.1)
     A reaction: The most basic distinction in the subject. Later (p.124) he suggests that the statistical form (known as 'aleatory' probability) is de re, and the other is de dicto.
Epistemological probability is based either on logical implication or on coherent personal judgement
     Full Idea: Epistemological probability is torn between Keynes etc saying it depends on the strength of logical implication, and Ramsey etc saying it is personal judgement which is subject to strong rules of internal coherence.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.2)
     A reaction: See Idea 7449 for epistemological probability. My immediate intuition is that the Ramsey approach sounds much more plausible. In real life there are too many fine-grained particulars involved for straight implication to settle a probability.
13. Knowledge Criteria / B. Internal Justification / 3. Evidentialism / a. Evidence
In the medieval view, only deduction counted as true evidence
     Full Idea: In the medieval view, evidence short of deduction was not really evidence at all.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.3)
     A reaction: Hacking says the modern concept of evidence comes with probability in the 17th century. That might make it one of the most important ideas ever thought of, allowing us to abandon certainties and live our lives in a more questioning way.
Formerly evidence came from people; the new idea was that things provided evidence
     Full Idea: In the medieval view, people provided the evidence of testimony and of authority. What was lacking was the seventeenth century idea of the evidence provided by things.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.4)
     A reaction: A most intriguing distinction, which seems to imply a huge shift in world-view. The culmination of this is Peirce's pragmatism, in Idea 6948, of which I strongly approve.
14. Science / A. Basis of Science / 3. Experiment
An experiment is a test, or an adventure, or a diagnosis, or a dissection
     Full Idea: An experiment is a test (if T, then E implies R, so try E, and if R follows, T seems right), an adventure (no theory, but try things), a diagnosis (reading the signs), or a dissection (taking apart).
     From: report of Ian Hacking (The Emergence of Probability [1975], Ch.4) by PG - Db (ideas)
     A reaction: A nice analysis. The Greeks did diagnosis, then the alchemists tried adventures, then Vesalius began dissections, then the followers of Bacon concentrated on the test, setting up controlled conditions. 'If you don't believe it, try it yourself'.
14. Science / D. Explanation / 2. Types of Explanation / a. Types of explanation
Follow maths for necessary truths, and jurisprudence for contingent truths
     Full Idea: Mathematics is the model for reasoning about necessary truths, but jurisprudence must be our model when we deliberate about contingencies.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.10)
     A reaction: Interesting. Certainly huge thinking, especially since the Romans, has gone into the law, and creating rules of evidence. Maybe all philosophers should study law and mathematics?