Combining Philosophers

All the ideas for Mark Sainsbury, Thomas Grundmann and Ian Hacking



37 ideas

1. Philosophy / C. History of Philosophy / 4. Later European Philosophy / b. Seventeenth century philosophy
Gassendi is the first great empiricist philosopher [Hacking]
     Full Idea: Gassendi is the first in the great line of empiricist philosophers that gradually came to dominate European thought.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.5)
     A reaction: Epicurus, of course, was clearly an empiricist. British readers should note that Gassendi was not British.
2. Reason / D. Definition / 3. Types of Definition
A decent modern definition should always imply a semantics [Hacking]
     Full Idea: Today we expect that anything worth calling a definition should imply a semantics.
     From: Ian Hacking (What is Logic? [1979], §10)
     A reaction: He compares this with Gentzen 1935, who was attempting purely syntactic definitions of the logical connectives.
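     As an illustration of what a purely syntactic definition looks like (standard Gentzen-style sequent rules, not a quotation from Hacking), 'and' is fixed entirely by how it may be introduced on either side of the turnstile:
     \[ \frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \land B} \qquad\qquad \frac{\Gamma, A, B \vdash \Theta}{\Gamma, A \land B \vdash \Theta} \]
     No model-theoretic notion of truth appears; the connective's meaning is given by the derivations these rules license.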
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Thinning' ('dilution') is the key difference between deduction (which allows it) and induction [Hacking]
     Full Idea: 'Dilution' (or 'Thinning') provides an essential contrast between deductive and inductive reasoning; for the introduction of new premises may spoil an inductive inference.
     From: Ian Hacking (What is Logic? [1979], §06.2)
     A reaction: That is, inductive logic (if there is such a thing) is clearly non-monotonic, whereas classical deductive logic is monotonic.
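     A minimal sketch of the contrast (the raven example is illustrative, not Hacking's): Thinning licenses adding premises without loss, which is just monotonicity:
     \[ \frac{\Gamma \vdash \Theta}{\Gamma, A \vdash \Theta} \ (\text{Thinning}) \]
     Inductive support lacks this property: 'all observed ravens are black' supports 'all ravens are black', but adding the premise 'one observed raven is white' withdraws the support rather than preserving it.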
Gentzen's Cut Rule (or transitivity of deduction) is 'If A |- B and B |- C, then A |- C' [Hacking]
     Full Idea: If A |- B and B |- C, then A |- C. This generalises to: If Γ|-A,Θ and Γ,A |- Θ, then Γ |- Θ. Gentzen called this 'cut'. It is the transitivity of a deduction.
     From: Ian Hacking (What is Logic? [1979], §06.3)
     A reaction: I read the generalisation as 'If A can be either a premise or a conclusion, you can bypass it'. The first version is just transitivity (which bypasses the middle step).
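     Set out as inference figures (a rendering of the text above in standard sequent notation), the two versions are:
     \[ \frac{A \vdash B \qquad B \vdash C}{A \vdash C} \qquad\qquad \frac{\Gamma \vdash A, \Theta \qquad \Gamma, A \vdash \Theta}{\Gamma \vdash \Theta} \ (\text{Cut}) \]
     The generalised form makes the 'bypassing' visible: A serves as a stepping-stone in the premises and disappears from the conclusion.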
Only Cut reduces complexity, so logic is constructive without it, and it can be dispensed with [Hacking]
     Full Idea: Only the cut rule can have a conclusion that is less complex than its premises. Hence when cut is not used, a derivation is quite literally constructive, building up from components. Any theorem obtained by cut can be obtained without it.
     From: Ian Hacking (What is Logic? [1979], §08)
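     A brief gloss on the complexity point (not a quotation): every rule other than Cut has the subformula property, so the formulas of its premises survive as subformulas of its conclusion, whereas the cut formula vanishes:
     \[ \frac{\Gamma \vdash A, \Theta \qquad \Gamma, A \vdash \Theta}{\Gamma \vdash \Theta} \quad\text{where } A \text{ need not occur in } \Gamma \vdash \Theta. \]
     Hence a cut-free derivation is built entirely from parts of its final sequent, and Gentzen's Hauptsatz guarantees that Cut can always be eliminated.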
5. Theory of Logic / A. Overview of Logic / 4. Pure Logic
The various logics are abstractions made from terms like 'if...then' in English [Hacking]
     Full Idea: I don't believe English is by nature classical or intuitionistic etc. These are abstractions made by logicians. Logicians attend to numerous different objects that might be served by 'If...then', like material conditional, strict or relevant implication.
     From: Ian Hacking (What is Logic? [1979], §15)
     A reaction: The idea that they are 'abstractions' is close to my heart. Abstractions from what? Surely 'if...then' has a standard character when employed in normal conversation?
5. Theory of Logic / A. Overview of Logic / 5. First-Order Logic
First-order logic is the strongest complete compact theory with Löwenheim-Skolem [Hacking]
     Full Idea: First-order logic is the strongest complete compact theory with a Löwenheim-Skolem theorem.
     From: Ian Hacking (What is Logic? [1979], §13)
A limitation of first-order logic is that it cannot handle branching quantifiers [Hacking]
     Full Idea: Henkin proved that there is no first-order treatment of branching quantifiers, which do not seem to involve any idea that is fundamentally different from ordinary quantification.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: See Hacking for an example of branching quantifiers. Hacking is impressed by this as a real limitation of the first-order logic which he generally favours.
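     For readers without Hacking to hand, a standard example (not his wording) of a branching (Henkin) quantifier prefix is:
     \[ \begin{pmatrix} \forall x \, \exists y \\ \forall u \, \exists v \end{pmatrix} \varphi(x,y,u,v) \]
     where y is to depend only on x and v only on u. Its natural paraphrase is second-order, \( \exists f \exists g \, \forall x \forall u \, \varphi(x, f(x), u, g(u)) \), which is why no linear first-order prefix captures it.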
5. Theory of Logic / A. Overview of Logic / 7. Second-Order Logic
Second-order completeness seems to need intensional entities and possible worlds [Hacking]
     Full Idea: Second-order logic has no chance of a completeness theorem unless one ventures into intensional entities and possible worlds.
     From: Ian Hacking (What is Logic? [1979], §13)
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
With a pure notion of truth and consequence, the meanings of connectives are fixed syntactically [Hacking]
     Full Idea: My doctrine is that the peculiarity of the logical constants resides precisely in that given a certain pure notion of truth and consequence, all the desirable semantic properties of the constants are determined by their syntactic properties.
     From: Ian Hacking (What is Logic? [1979], §09)
     A reaction: He opposes this to Peacocke 1976, who claims that the logical connectives are essentially semantic in character, concerned with the preservation of truth.
5. Theory of Logic / E. Structures of Logic / 4. Variables in Logic
Perhaps variables could be dispensed with, by arrows joining places in the scope of quantifiers [Hacking]
     Full Idea: For some purposes the variables of first-order logic can be regarded as prepositions and place-holders that could in principle be dispensed with, say by a system of arrows indicating what places fall in the scope of which quantifier.
     From: Ian Hacking (What is Logic? [1979], §11)
     A reaction: I tend to think of variables as either pronouns, or as definite descriptions, or as temporary names, but not as prepositions. Must address this new idea...
5. Theory of Logic / F. Referring in Logic / 1. Naming / e. Empty names
It is best to say that a name designates iff there is something for it to designate [Sainsbury]
     Full Idea: It is better to say that 'For all x ("Hesperus" stands for x iff x = Hesperus)', than to say '"Hesperus" stands for Hesperus', since then the expression can be a name with no bearer (e.g. "Vulcan").
     From: Mark Sainsbury (The Essence of Reference [2006], 18.2)
     A reaction: In cases where it is unclear whether the name actually designates something, it seems desirable that the name is at least allowed to function semantically.
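     A formal rendering of the proposal (notation is illustrative, not Sainsbury's): the reference axiom for a name takes the conditional form
     \[ \forall x\,(\text{``Hesperus'' stands for } x \leftrightarrow x = \text{Hesperus}) \]
     so for an empty name such as 'Vulcan', on a free-logic reading, nothing satisfies either side; the axiom ascribes no bearer, yet the name still receives a semantic clause.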
5. Theory of Logic / F. Referring in Logic / 2. Descriptions / b. Definite descriptions
Definite descriptions may not be referring expressions, since they can fail to refer [Sainsbury]
     Full Idea: Almost everyone agrees that intelligible definite descriptions may lack a referent; this has historically been a reason for not counting them among referring expressions.
     From: Mark Sainsbury (The Essence of Reference [2006], 18.2)
     A reaction: One might compare indexicals such as 'I', which may be incapable of failing to refer when spoken. However 'look at that!' frequently fails to communicate reference.
Definite descriptions are usually rigid in subject, but not in predicate, position [Sainsbury]
     Full Idea: Definite descriptions used with referential intentions (usually in subject position) are normally rigid, ..but in predicate position they are normally not rigid, because there is no referential intention.
     From: Mark Sainsbury (The Essence of Reference [2006], 18.5)
     A reaction: 'The man in the blue suit is the President' seems to fit, but 'The President is the head of state' doesn't. Seems roughly right, but language is always too complex for philosophers.
5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
If it is a logic, the Löwenheim-Skolem theorem holds for it [Hacking]
     Full Idea: A Löwenheim-Skolem theorem holds for anything which, on my delineation, is a logic.
     From: Ian Hacking (What is Logic? [1979], §13)
     A reaction: I take this to be an unusually conservative view. Shapiro is the chap who can give you an alternative view of these things, or Boolos.
7. Existence / D. Theories of Reality / 10. Vagueness / b. Vagueness of reality
If 'red' is vague, then membership of the set of red things is vague, so there is no set of red things [Sainsbury]
     Full Idea: Sets have sharp boundaries, or are sharp objects; an object either definitely belongs to a set, or it does not. But 'red' is vague; there are objects which are neither definitely red nor definitely not red. Hence there is no set of red things.
     From: Mark Sainsbury (Concepts without Boundaries [1990], §2)
     A reaction: Presumably that will entail that there IS a set of things which can be described as 'definitely red'. If we describe something as 'definitely having a hint of red about it', will that put it in a set? In fact will the applicability of 'definitely' do?
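     Set out schematically (a reconstruction, writing Rx for 'x is red'): suppose there were a set S of exactly the red things. Then
     \[ \forall x\,(x \in S \ \lor\ x \notin S) \qquad\text{and}\qquad \forall x\,(x \in S \leftrightarrow Rx), \]
     so 'red' would be sharp for every object; since 'red' admits borderline cases, no such set exists on these premises.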
7. Existence / E. Categories / 2. Categorisation
We should abandon classifying by pigeon-holes, and classify around paradigms [Sainsbury]
     Full Idea: We must reject the classical picture of classification by pigeon-holes, and think in other terms: classifying can be, and often is, clustering round paradigms.
     From: Mark Sainsbury (Concepts without Boundaries [1990], §8)
     A reaction: His conclusion to a discussion of the problem of vagueness, where it is identified with concepts which have no boundaries. Pigeon-holes are a nice exemplar of the Enlightenment desire to get everything right. I prefer Aristotle's categories, Idea 3311.
9. Objects / B. Unity of Objects / 3. Unity Problems / e. Vague objects
Vague concepts are concepts without boundaries [Sainsbury]
     Full Idea: If a word is vague, there are or could be borderline cases, but non-vague expressions can also have borderline cases. The essence of vagueness is to be found in the idea that vague concepts are concepts without boundaries.
     From: Mark Sainsbury (Concepts without Boundaries [1990], Intro)
     A reaction: He goes on to say that vague concepts are not embodied in clear cut sets, which is what gives us our notion of a boundary. So what is vague is 'membership'. You are either a member of a club or not, but when do you join the 'middle-aged'?
If concepts are vague, people avoid boundaries, can't spot them, and don't want them [Sainsbury]
     Full Idea: Vague concepts are boundaryless, ...and the manifestations are an unwillingness to draw any such boundaries, the impossibility of identifying such boundaries, and the needlessness and even disutility of such boundaries.
     From: Mark Sainsbury (Concepts without Boundaries [1990], §5)
     A reaction: People have a very fine-tuned notion of whether the sharp boundary of a concept is worth discussing. The interesting exceptions are legal people, who are often forced to find precision where everyone else hates it. Who deserves to inherit the big house?
Boundaryless concepts tend to come in pairs, such as child/adult, hot/cold [Sainsbury]
     Full Idea: Boundaryless concepts tend to come in systems of contraries: opposed pairs like child/adult, hot/cold, weak/strong, true/false, and complex systems of colour terms. ..Only a contrast with 'adult' will show what 'child' excludes.
     From: Mark Sainsbury (Concepts without Boundaries [1990], §5)
     A reaction: This might be expected. It all comes down to the sorites problem, of when one thing turns into something else. If it won't merge into another category, then presumably the isolated concept stays applicable (until reality terminates it? End of sheep..).
10. Modality / B. Possibility / 6. Probability
Probability was fully explained between 1654 and 1812 [Hacking]
     Full Idea: There is hardly any history of probability to record before Pascal (1654), and the whole subject is very well understood after Laplace (1812).
     From: Ian Hacking (The Emergence of Probability [1975], Ch.1)
     A reaction: An interesting little pointer on the question of whether the human race is close to exhausting all the available intellectual problems. What then?
Probability is statistical (behaviour of chance devices) or epistemological (belief based on evidence) [Hacking]
     Full Idea: Probability has two aspects: the degree of belief warranted by evidence, and the tendency displayed by some chance device to produce stable relative frequencies. These are the epistemological and statistical aspects of the subject.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.1)
     A reaction: The most basic distinction in the subject. Later (p.124) he suggests that the statistical form (known as 'aleatory' probability) is de re, and the other is de dicto.
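     A compact way to display the two aspects (standard formulas, not Hacking's text): the statistical side concerns stable relative frequencies, the epistemological side degrees of belief responsive to evidence:
     \[ P(\text{heads}) \approx \frac{\text{number of heads in } n \text{ trials}}{n} \ (\text{large } n), \qquad P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}. \]
     The first is read off the long-run behaviour of the chance device; the second (Bayes' theorem) measures how far evidence E warrants belief in hypothesis H.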
Epistemological probability is based either on logical implication or on coherent judgement [Hacking]
     Full Idea: Epistemological probability is torn between Keynes etc saying it depends on the strength of logical implication, and Ramsey etc saying it is personal judgement which is subject to strong rules of internal coherence.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.2)
     A reaction: See Idea 7449 for epistemological probability. My immediate intuition is that the Ramsey approach sounds much more plausible. In real life there are too many fine-grained particulars involved for straight implication to settle a probability.
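     The Ramsey-style 'rules of internal coherence' amount to the standard probability axioms, which personal degrees of belief must jointly satisfy (a textbook formulation, not Hacking's):
     \[ 0 \le P(A) \le 1, \qquad P(\top) = 1, \qquad P(A \lor B) = P(A) + P(B) \ \text{when } A \text{ and } B \text{ are incompatible}. \]
     Violating them exposes the believer to a Dutch book, which is the usual argument that personal judgement is nonetheless 'subject to strong rules'.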
11. Knowledge Aims / B. Certain Knowledge / 3. Fallibilism
Indefeasibility does not imply infallibility [Grundmann]
     Full Idea: Infallibility does not follow from indefeasibility.
     From: Thomas Grundmann (Defeasibility Theory [2011], 'Significance')
     A reaction: If very little evidence exists then this could clearly be the case. It is especially true of historical and archaeological evidence.
13. Knowledge Criteria / A. Justification Problems / 1. Justification / c. Defeasibility
Can a defeater itself be defeated? [Grundmann]
     Full Idea: Can the original justification of a belief be regained through a successful defeat of a defeater?
     From: Thomas Grundmann (Defeasibility Theory [2011], 'Defeater-Defs')
     A reaction: [Jäger 2005 addresses this] I would have thought the answer is yes. I aspire to coherent justifications, so I don't see justifications as a chain of defeat and counter-defeat, but as collective groups of support and challenge.
Simple reliabilism can't cope with defeaters of reliably produced beliefs [Grundmann]
     Full Idea: An unmodified reliabilism does not accommodate defeaters, and surely there can be defeaters against reliably produced beliefs?
     From: Thomas Grundmann (Defeasibility Theory [2011], 'Defeaters')
     A reaction: [He cites Bonjour 1980] Reliabilism has plenty of problems anyway, since a generally reliable process can obviously occasionally produce a bad result. 20:20 vision is not perfect vision. Internalists seem to like defeaters.
You can 'rebut' previous beliefs, 'undercut' the power of evidence, or 'reason-defeat' the truth [Grundmann]
     Full Idea: There are 'rebutting' defeaters against the truth of a previously justified belief, 'undercutting' defeaters against the power of the evidence, and 'reason-defeating' defeaters against the truth of the reason for the belief.
     From: Thomas Grundmann (Defeasibility Theory [2011], 'How')
     A reaction: That is (I think) that you can defeat the background, the likelihood, or the truth. He cites Pollock 1986, and implies that these are standard distinctions about defeaters.
Defeasibility theory needs to exclude defeaters which are true but misleading [Grundmann]
     Full Idea: Advocates of the defeasibility theory have tried to exclude true pieces of information that are misleading defeaters.
     From: Thomas Grundmann (Defeasibility Theory [2011], 'What')
     A reaction: He gives as an example the genuine news of a claim that the suspect has a twin.
Knowledge requires that there are no facts which would defeat its justification [Grundmann]
     Full Idea: The 'defeasibility theory' of knowledge claims that knowledge is only present if there are no facts that - if they were known - would be genuine defeaters of the relevant justification.
     From: Thomas Grundmann (Defeasibility Theory [2011], 'What')
     A reaction: Something not right here. A genuine defeater would ensure the proposition was false, so it would simply fail the truth test. So we need a 'defeater' for a truth, which must therefore by definition be misleading. Many qualifications have to be invoked.
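     A schematic statement of the condition (a formalisation for illustration, not Grundmann's wording): S knows that p on justification J only if
     \[ p \ \wedge\ \neg\exists d\,\big(d \text{ is true} \ \wedge\ J{+}d \text{ would no longer justify } p \text{ for } S\big), \]
     which shows that a defeater targets the justification J rather than the truth of p itself.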
13. Knowledge Criteria / B. Internal Justification / 3. Evidentialism / a. Evidence
In the medieval view, only deduction counted as true evidence [Hacking]
     Full Idea: In the medieval view, evidence short of deduction was not really evidence at all.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.3)
     A reaction: Hacking says the modern concept of evidence comes with probability in the 17th century. That might make it one of the most important ideas ever thought of, allowing us to abandon certainties and live our lives in a more questioning way.
Formerly evidence came from people; the new idea was that things provided evidence [Hacking]
     Full Idea: In the medieval view, people provided the evidence of testimony and of authority. What was lacking was the seventeenth century idea of the evidence provided by things.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.4)
     A reaction: A most intriguing distinction, which seems to imply a huge shift in world-view. The culmination of this is Peirce's pragmatism, in Idea 6948, of which I strongly approve.
13. Knowledge Criteria / B. Internal Justification / 4. Foundationalism / b. Basic beliefs
'Moderate' foundationalism has basic justification which is defeasible [Grundmann]
     Full Idea: Theories that combine basic justification with the defeasibility of this justification are referred to as 'moderate' foundationalism.
     From: Thomas Grundmann (Defeasibility Theory [2011], 'Significance')
     A reaction: I could be more sympathetic to this sort of foundationalism. But it begins to sound more like Neurath's boat (see Quine) than like Descartes' metaphor of building foundations.
14. Science / A. Basis of Science / 3. Experiment
An experiment is a test, or an adventure, or a diagnosis, or a dissection [Hacking, by PG]
     Full Idea: An experiment is a test (if T, then E implies R, so try E, and if R follows, T seems right), an adventure (no theory, but try things), a diagnosis (reading the signs), or a dissection (taking apart).
     From: report of Ian Hacking (The Emergence of Probability [1975], Ch.4) by PG - Db (ideas)
     A reaction: A nice analysis. The Greeks did diagnosis, then the alchemists tried adventures, then Vesalius began dissections, then the followers of Bacon concentrated on the test, setting up controlled conditions. 'If you don't believe it, try it yourself'.
14. Science / D. Explanation / 2. Types of Explanation / a. Types of explanation
Follow maths for necessary truths, and jurisprudence for contingent truths [Hacking]
     Full Idea: Mathematics is the model for reasoning about necessary truths, but jurisprudence must be our model when we deliberate about contingencies.
     From: Ian Hacking (The Emergence of Probability [1975], Ch.10)
     A reaction: Interesting. Certainly a huge amount of thought, especially since the Romans, has gone into the law, and into creating rules of evidence. Maybe all philosophers should study law and mathematics?
19. Language / B. Reference / 3. Direct Reference / b. Causal reference
A new usage of a name could arise from a mistaken baptism of nothing [Sainsbury]
     Full Idea: A baptism which, perhaps through some radical mistake, is the baptism of nothing, is as good a propagator of a new use as a baptism of an object.
     From: Mark Sainsbury (The Essence of Reference [2006], 18.3)
     A reaction: An obvious example might be the Loch Ness Monster. There is something intuitively wrong about saying that physical objects are actually part of linguistic meaning or reference. I am not a meaning!
19. Language / B. Reference / 5. Speaker's Reference
Even a quantifier like 'someone' can be used referentially [Sainsbury]
     Full Idea: A large range of expressions can be used with referential intentions, including quantifier phrases (as in 'someone has once again failed to close the door properly').
     From: Mark Sainsbury (The Essence of Reference [2006], 18.5)
     A reaction: This is the pragmatic aspect of reference, where it can be achieved by all sorts of means. But are quantifiers inherently referential in their semantic function? Some of each, it seems.
26. Natural Theory / A. Speculations on Nature / 3. Natural Function
Things are thought to have a function, even when they can't perform them [Sainsbury]
     Full Idea: On one common use of the notion of a function, something can possess a function which it does not, or even cannot, perform. The function of a malformed heart is to pump blood, even if such a heart cannot in fact pump blood.
     From: Mark Sainsbury (The Essence of Reference [2006], 18.2)
     A reaction: One might say that the heart in a dead body had the function of pumping blood, but does it still have that function? Do I have the function of breaking the world 100 metres record, even though I can't quite manage it? Not that simple.