Combining Philosophers

All the ideas for H.Putnam/P.Oppenheim, Stephen Read and E.J. Lemmon



102 ideas

4. Formal Logic / B. Propositional Logic PL / 1. Propositional Logic
'Contradictory' propositions always differ in truth-value [Lemmon]
     Full Idea: Two propositions are 'contradictory' if they are never both true and never both false either, which means that ¬(A↔B) is a tautology.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / a. Symbols of PL
We write the conditional 'if P (antecedent) then Q (consequent)' as P→Q [Lemmon]
     Full Idea: We write 'if P then Q' as P→Q. This is called a 'conditional', with P as its 'antecedent', and Q as its 'consequent'.
     From: E.J. Lemmon (Beginning Logic [1965], 1.2)
     A reaction: P→Q can also be written as ¬P∨Q.
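     A minimal Python sketch (not Lemmon's own presentation) of this equivalence: the conditional, false only when P is true and Q false, agrees with ¬P∨Q on every assignment.

          from itertools import product

          # Truth-table for the conditional: P→Q is false only when P is true and Q false.
          def conditional(p, q):
              return not (p and not q)

          for p, q in product([True, False], repeat=2):
              assert conditional(p, q) == ((not p) or q)   # ¬P∨Q

          print("P→Q and ¬P∨Q agree on all four assignments")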
That proposition that either P or Q is their 'disjunction', written P∨Q [Lemmon]
     Full Idea: If P and Q are any two propositions, the proposition that either P or Q is called the 'disjunction' of P and Q, and is written P∨Q.
     From: E.J. Lemmon (Beginning Logic [1965], 1.3)
     A reaction: This is inclusive-or (meaning 'P, or Q, or both'), and not exclusive-or (Boolean XOR), which means 'P, or Q, but not both'. The ∨ sign is sometimes called 'vel' (Latin).
We write the 'negation' of P (not-P) as ¬ [Lemmon]
     Full Idea: We write 'not-P' as ¬P. This is called the 'negation' of P. The 'double negation' of P (not not-P) would be written as ¬¬P.
     From: E.J. Lemmon (Beginning Logic [1965], 1.2)
     A reaction: Lemmon's use of -P is no longer in use for 'not'. A tilde sign (squiggle) is also used for 'not', but some interpreters give that a subtly different meaning (involving vagueness). The sign ¬ is sometimes called 'hook' or 'corner'.
We write 'P if and only if Q' as P↔Q; it is also P iff Q, or (P→Q)∧(Q→P) [Lemmon]
     Full Idea: We write 'P if and only if Q' as P↔Q. It is called the 'biconditional', often abbreviated in writing as 'iff'. It also says that P is both sufficient and necessary for Q, and may be written out in full as (P→Q)∧(Q→P).
     From: E.J. Lemmon (Beginning Logic [1965], 1.4)
     A reaction: If this symbol is found in a sequence, the first move in a proof is to expand it to the full version.
If A and B are 'interderivable' from one another we may write A -||- B [Lemmon]
     Full Idea: If we say that A and B are 'interderivable' from one another (that is, A |- B and B |- A), then we may write A -||- B.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
That proposition that both P and Q is their 'conjunction', written P∧Q [Lemmon]
     Full Idea: If P and Q are any two propositions, the proposition that both P and Q is called the 'conjunction' of P and Q, and is written P∧Q.
     From: E.J. Lemmon (Beginning Logic [1965], 1.3)
     A reaction: [I use the more fashionable inverted-v '∧', rather than Lemmon's '&', which no longer seems to be used] P∧Q can also be defined as ¬(¬P∨¬Q)
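     The same brute-force check in Python covers the two preceding definitional equivalences: P↔Q agrees with (P→Q)∧(Q→P), and P∧Q with ¬(¬P∨¬Q), on every assignment.

          from itertools import product

          def conditional(p, q):
              return not (p and not q)

          for p, q in product([True, False], repeat=2):
              assert (p == q) == (conditional(p, q) and conditional(q, p))   # P↔Q as (P→Q)∧(Q→P)
              assert (p and q) == (not ((not p) or (not q)))                 # P∧Q as ¬(¬P∨¬Q)

          print("Both definitional equivalences hold on every assignment")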
The sign |- may be read as 'therefore' [Lemmon]
     Full Idea: I introduce the sign |- to mean 'we may validly conclude'. To call it the 'assertion sign' is misleading. It may conveniently be read as 'therefore'.
     From: E.J. Lemmon (Beginning Logic [1965], 1.2)
     A reaction: [Actually no gap between the vertical and horizontal strokes of the sign] As well as meaning 'assertion', it may also mean 'it is a theorem that' (with no proof shown).
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / b. Terminology of PL
A 'well-formed formula' follows the rules for variables, ¬, →, ∧, ∨, and ↔ [Lemmon]
     Full Idea: A 'well-formed formula' of the propositional calculus is a sequence of symbols which follows the rules for variables, ¬, →, ∧, ∨, and ↔.
     From: E.J. Lemmon (Beginning Logic [1965], 2.1)
The 'scope' of a connective is the connective, the linked formulae, and the brackets [Lemmon]
     Full Idea: The 'scope' of a connective in a certain formula comprises the formulae linked by the connective, together with the connective itself and the (theoretically) encircling brackets.
     From: E.J. Lemmon (Beginning Logic [1965], 2.1)
A 'substitution-instance' is a wff formed by consistently replacing variables with wffs [Lemmon]
     Full Idea: A 'substitution-instance' is a wff which results from replacing one or more variables throughout with wffs (the same wff replacing every occurrence of a given variable).
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
A wff is 'inconsistent' if all assignments to variables result in the value F [Lemmon]
     Full Idea: If a well-formed formula of propositional calculus takes the value F for all possible assignments of truth-values to its variables, it is said to be 'inconsistent'.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
'Contrary' propositions are never both true, so that ¬(A∧B) is a tautology [Lemmon]
     Full Idea: If A and B are expressible in propositional calculus notation, they are 'contrary' if they are never both true, which may be tested by the truth-table for ¬(A∧B), which is a tautology if they are contrary.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
Two propositions are 'equivalent' if they mirror one another's truth-value [Lemmon]
     Full Idea: Two propositions are 'equivalent' if whenever A is true B is true, and whenever B is true A is true, in which case A↔B is a tautology.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
A wff is 'contingent' if it produces at least one T and at least one F [Lemmon]
     Full Idea: If a well-formed formula of propositional calculus takes the value T for at least one assignment of truth-values to its variables, and the value F for at least one, it is said to be 'contingent'.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
'Subcontrary' propositions are never both false, so that A∨B is a tautology [Lemmon]
     Full Idea: If A and B are expressible in propositional calculus notation, they are 'subcontrary' if they are never both false, which may be tested by the truth-table for A∨B, which is a tautology if they are subcontrary.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
A 'implies' B if B is true whenever A is true (so that A→B is tautologous) [Lemmon]
     Full Idea: One proposition A 'implies' a proposition B if whenever A is true B is true (but not necessarily conversely), which is only the case if A→B is tautologous. Hence B 'is implied' by A.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
A wff is a 'tautology' if all assignments to variables result in the value T [Lemmon]
     Full Idea: If a well-formed formula of propositional calculus takes the value T for all possible assignments of truth-values to its variables, it is said to be a 'tautology'.
     From: E.J. Lemmon (Beginning Logic [1965], 2.3)
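     A small Python sketch (mine, not Lemmon's) of the three truth-table classifications defined in this section: a wff is a tautology, inconsistent, or contingent according to the values it takes over all assignments.

          from itertools import product

          def classify(wff, n):
              # Evaluate an n-variable wff (given as a Boolean function) on every assignment.
              values = {wff(*assignment) for assignment in product([True, False], repeat=n)}
              if values == {True}:
                  return "tautology"
              if values == {False}:
                  return "inconsistent"
              return "contingent"

          print(classify(lambda p: p or not p, 1))         # tautology
          print(classify(lambda p: p and not p, 1))        # inconsistent
          print(classify(lambda p, q: not (p and q), 2))   # contingent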
A 'theorem' is the conclusion of a provable sequent with zero assumptions [Lemmon]
     Full Idea: A 'theorem' of logic is the conclusion of a provable sequent in which the number of assumptions is zero.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
     A reaction: This is what Quine and others call a 'logical truth'.
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / c. Derivation rules of PL
∧I: Given A and B, we may derive A∧B [Lemmon]
     Full Idea: And-Introduction (&I): Given A and B, we may derive A∧B as conclusion. This depends on their previous assumptions.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
CP: Given a proof of B from A as assumption, we may derive A→B [Lemmon]
     Full Idea: Conditional Proof (CP): Given a proof of B from A as assumption, we may derive A→B as conclusion, on the remaining assumptions (if any).
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
MPP: Given A and A→B, we may derive B [Lemmon]
     Full Idea: Modus Ponendo Ponens (MPP): Given A and A→B, we may derive B as a conclusion. B will rest on any assumptions that have been made.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
RAA: If assuming A will prove B∧¬B, then derive ¬A [Lemmon]
     Full Idea: Reductio ad Absurdum (RAA): Given a proof of B∧¬B from A as assumption, we may derive ¬A as conclusion, depending on the remaining assumptions (if any).
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
MTT: Given ¬B and A→B, we derive ¬A [Lemmon]
     Full Idea: Modus Tollendo Tollens (MTT): Given ¬B and A→B, we derive ¬A as a conclusion. ¬A depends on any assumptions that have been made.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
∨I: Given either A or B separately, we may derive A∨B [Lemmon]
     Full Idea: Or-Introduction (∨I): Given either A or B separately, we may derive A∨B as conclusion. This depends on the assumption of the premisses.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
Three traditional names of rules are 'Simplification', 'Addition' and 'Disjunctive Syllogism' [Read]
     Full Idea: Three traditional names for rules are 'Simplification' (P from 'P and Q'), 'Addition' ('P or Q' from P), and 'Disjunctive Syllogism' (Q from 'P or Q' and 'not-P').
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
∨E: Derive C from A∨B, if C can be derived both from A and from B [Lemmon]
     Full Idea: Or-Elimination (∨E): Given A∨B, we may derive C if it is proved from A as assumption and from B as assumption. This will also depend on prior assumptions.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
DN: Given A, we may derive ¬¬A [Lemmon]
     Full Idea: Double Negation (DN): Given A, we may derive ¬¬A as a conclusion, and vice versa. The conclusion depends on the assumptions of the premiss.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
A: we may assume any proposition at any stage [Lemmon]
     Full Idea: Assumptions (A): any proposition may be introduced at any stage of a proof.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
∧E: Given A∧B, we may derive either A or B separately [Lemmon]
     Full Idea: And-Elimination (∧E): Given A∧B, we may derive either A or B separately. The conclusions will depend on the assumptions of the premiss.
     From: E.J. Lemmon (Beginning Logic [1965], 1.5)
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Modus ponendo tollens' (MPT) says P, ¬(P ∧ Q) |- ¬Q [Lemmon]
     Full Idea: 'Modus ponendo tollens' (MPT) says that if the negation of a conjunction holds and also one of its conjuncts, then the negation of the other conjunct holds. Thus P, ¬(P ∧ Q) |- ¬Q may be introduced as a theorem.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
     A reaction: Unlike Modus Ponens and Modus Tollens, this is a derived rule.
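     A Python check of the sequent's validity (a semantic sketch, not Lemmon's derivation): no assignment makes both premises true while the conclusion is false.

          from itertools import product

          # Look for a row where P and ¬(P∧Q) are true but ¬Q is false (i.e. Q is true).
          counterexamples = [(p, q)
                             for p, q in product([True, False], repeat=2)
                             if p and not (p and q) and q]
          print(counterexamples)   # [] - so P, ¬(P∧Q) |- ¬Q is valid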
We can change conditionals into negated conjunctions with P→Q -||- ¬(P ∧ ¬Q) [Lemmon]
     Full Idea: The proof that P→Q -||- ¬(P ∧ ¬Q) is useful for enabling us to change conditionals into negated conjunctions.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
We can change conditionals into disjunctions with P→Q -||- ¬P ∨ Q [Lemmon]
     Full Idea: The proof that P→Q -||- ¬P ∨ Q is useful for enabling us to change conditionals into disjunctions.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
De Morgan's Laws make negated conjunctions/disjunctions into non-negated disjunctions/conjunctions [Lemmon]
     Full Idea: The forms of De Morgan's Laws [P∨Q -||- ¬(¬P ∧ ¬Q); ¬(P∨Q) -||- ¬P ∧ ¬Q; ¬(P∧Q) -||- ¬P ∨ ¬Q; P∧Q -||- ¬(¬P∨¬Q)] transform negated conjunctions and disjunctions into non-negated disjunctions and conjunctions respectively.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
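     A brute-force Python check of all four forms over the four assignments to P and Q.

          from itertools import product

          for p, q in product([True, False], repeat=2):
              assert (p or q) == (not (not p and not q))
              assert (not (p or q)) == (not p and not q)
              assert (not (p and q)) == (not p or not q)
              assert (p and q) == (not (not p or not q))

          print("All four De Morgan equivalences hold")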
The Distributive Laws can rearrange a pair of conjunctions or disjunctions [Lemmon]
     Full Idea: The Distributive Laws say that P ∧ (Q∨R) -||- (P∧Q) ∨ (P∧R), and that P ∨ (Q∧R) -||- (P∨Q) ∧ (P∨R).
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
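     The same style of Python check for the two Distributive Laws, over the eight assignments to P, Q and R.

          from itertools import product

          for p, q, r in product([True, False], repeat=3):
              assert (p and (q or r)) == ((p and q) or (p and r))
              assert (p or (q and r)) == ((p or q) and (p or r))

          print("Both distributive equivalences hold")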
We can change conjunctions into negated conditionals with P∧Q -||- ¬(P → ¬Q) [Lemmon]
     Full Idea: The proof that P∧Q -||- ¬(P → ¬Q) is useful for enabling us to change conjunctions into negated conditionals.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
'Modus tollendo ponens' (MTP) says ¬P, P ∨ Q |- Q [Lemmon]
     Full Idea: 'Modus tollendo ponens' (MTP) says that if a disjunction holds and also the negation of one of its disjuncts, then the other disjunct holds. Thus ¬P, P ∨ Q |- Q may be introduced as a theorem.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
     A reaction: Unlike Modus Ponens and Modus Tollens, this is a derived rule.
4. Formal Logic / B. Propositional Logic PL / 3. Truth Tables
Truth-tables are good for showing invalidity [Lemmon]
     Full Idea: The truth-table approach enables us to show the invalidity of argument-patterns, as well as their validity.
     From: E.J. Lemmon (Beginning Logic [1965], 2.4)
A truth-table test is entirely mechanical, but this won't work for more complex logic [Lemmon]
     Full Idea: A truth-table test is entirely mechanical, ..and in propositional logic we can even generate proofs mechanically for tautologous sequents, ..but this mechanical approach breaks down with predicate calculus, and proof-discovery is an imaginative process.
     From: E.J. Lemmon (Beginning Logic [1965], 2.5)
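     A Python sketch of an invalidity test, using 'affirming the consequent' (P→Q, Q, therefore P) as the example - my example, not Lemmon's. A single row with true premises and a false conclusion is enough to show invalidity.

          from itertools import product

          for p, q in product([True, False], repeat=2):
              premises_true = (not (p and not q)) and q   # P→Q and Q
              if premises_true and not p:                 # conclusion P is false
                  print(f"Counterexample: P={p}, Q={q}")  # P=False, Q=True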
4. Formal Logic / B. Propositional Logic PL / 4. Soundness of PL
If any of the nine rules of propositional logic are applied to tautologies, the result is a tautology [Lemmon]
     Full Idea: It can be demonstrated that whenever the nine derivation rules of propositional logic are applied to tautologous sequents, the result is always a tautologous sequent. Thus the system is consistent.
     From: E.J. Lemmon (Beginning Logic [1965], 2.4)
     A reaction: The term 'sound' tends to be used now, rather than 'consistent'. See Lemmon for the proofs of each of the nine rules.
4. Formal Logic / B. Propositional Logic PL / 5. Completeness of PL
Propositional logic is complete, since all of its tautologous sequents are derivable [Lemmon]
     Full Idea: A logical system is complete if all expressions of a specified kind are derivable in it. If we specify tautologous sequent-expressions, then propositional logic is complete, because we can show that all tautologous sequents are derivable.
     From: E.J. Lemmon (Beginning Logic [1965], 2.5)
     A reaction: [See Lemmon 2.5 for details of the proofs]
4. Formal Logic / C. Predicate Calculus PC / 2. Tools of Predicate Calculus / a. Symbols of PC
Write '(∀x)(...)' to mean 'take any x: then...', and '(∃x)(...)' to mean 'there is an x such that....' [Lemmon]
     Full Idea: Just as '(∀x)(...)' is to mean 'take any x: then....', so we write '(∃x)(...)' to mean 'there is an x such that....'
     From: E.J. Lemmon (Beginning Logic [1965], 3.1)
     A reaction: [Actually Lemmon gives the universal quantifier symbol as '(x)', but the inverted A ('∀') seems to have replaced it these days]
'Gm' says m has property G, and 'Pmn' says m has relation P to n [Lemmon]
     Full Idea: A predicate letter followed by one name expresses a property ('Gm'), and a predicate-letter followed by two names expresses a relation ('Pmn'). We could write 'Pmno' for a complex relation like betweenness.
     From: E.J. Lemmon (Beginning Logic [1965], 3.1)
The 'symbols' are bracket, connective, term, variable, predicate letter, reverse-E [Lemmon]
     Full Idea: I define a 'symbol' (of the predicate calculus) as either a bracket or a logical connective or a term or an individual variable or a predicate-letter or reverse-E (∃).
     From: E.J. Lemmon (Beginning Logic [1965], 4.1)
4. Formal Logic / C. Predicate Calculus PC / 2. Tools of Predicate Calculus / b. Terminology of PC
Our notation uses 'predicate-letters' (for 'properties'), 'variables', 'proper names', 'connectives' and 'quantifiers' [Lemmon]
     Full Idea: Quantifier-notation might be thus: first, render into sentences about 'properties', and use 'predicate-letters' for them; second, introduce 'variables'; third, introduce propositional logic 'connectives' and 'quantifiers'. Plus letters for 'proper names'.
     From: E.J. Lemmon (Beginning Logic [1965], 3.1)
4. Formal Logic / C. Predicate Calculus PC / 2. Tools of Predicate Calculus / c. Derivations rules of PC
Universal Elimination (UE) lets us infer that an object has F, from all things having F [Lemmon]
     Full Idea: Our rule of universal quantifier elimination (UE) lets us infer that any particular object has F from the premiss that all things have F. It is a natural extension of &E (and-elimination), as universal propositions generally affirm a complex conjunction.
     From: E.J. Lemmon (Beginning Logic [1965], 3.2)
With finite named objects, we can generalise with &-Intro, but otherwise we need ∀-Intro [Lemmon]
     Full Idea: If there are just three objects and each has F, then by an extension of &I we are sure everything has F. This is of no avail, however, if our universe is infinitely large or if not all objects have names. We need a new device, Universal Introduction, UI.
     From: E.J. Lemmon (Beginning Logic [1965], 3.2)
UE all-to-one; UI one-to-all; EI arbitrary-to-one; EE proof-to-one [Lemmon]
     Full Idea: Univ Elim UE - if everything is F, then any particular thing is F; Univ Intro UI - if an arbitrary thing is F, everything is F; Exist Intro EI - if an arbitrary thing is F, something is F; Exist Elim EE - if a proof needed an object, there is one.
     From: E.J. Lemmon (Beginning Logic [1965], 3.3)
     A reaction: [My summary of Lemmon's four main rules for predicate calculus] This is the natural deduction approach, of trying to present the logic entirely in terms of introduction and elimination rules. See Bostock on that.
Predicate logic uses propositional connectives and variables, plus new introduction and elimination rules [Lemmon]
     Full Idea: In predicate calculus we take over the propositional connectives and propositional variables - but we need additional rules for handling quantifiers: four rules, an introduction and elimination rule for the universal and existential quantifiers.
     From: E.J. Lemmon (Beginning Logic [1965])
     A reaction: This is Lemmon's natural deduction approach (invented by Gentzen), which is largely built on introduction and elimination rules.
Universal elimination if you start with the universal, introduction if you want to end with it [Lemmon]
     Full Idea: The elimination rule for the universal quantifier concerns the use of a universal proposition as a premiss to establish some conclusion, whilst the introduction rule concerns what is required by way of a premiss for a universal proposition as conclusion.
     From: E.J. Lemmon (Beginning Logic [1965], 3.2)
     A reaction: So if you start with the universal, you need to eliminate it, and if you start without it you need to introduce it.
4. Formal Logic / C. Predicate Calculus PC / 2. Tools of Predicate Calculus / d. Universal quantifier ∀
If there is a finite domain and all objects have names, complex conjunctions can replace universal quantifiers [Lemmon]
     Full Idea: If all objects in a given universe had names which we knew and there were only finitely many of them, then we could always replace a universal proposition about that universe by a complex conjunction.
     From: E.J. Lemmon (Beginning Logic [1965], 3.2)
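     A toy Python illustration (the three-object universe is my example): on every assignment of F over a named finite domain, the universal proposition agrees with the complex conjunction.

          from itertools import product

          names = ("a", "b", "c")
          for values in product([True, False], repeat=3):
              F = dict(zip(names, values))
              assert all(F[n] for n in names) == (F["a"] and F["b"] and F["c"])

          print("'Everything is F' and Fa∧Fb∧Fc agree on this finite domain")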
4. Formal Logic / C. Predicate Calculus PC / 2. Tools of Predicate Calculus / e. Existential quantifier ∃
'Some Frenchmen are generous' is rendered by (∃x)(Fx∧Gx), not by the conditional (∃x)(Fx→Gx) [Lemmon]
     Full Idea: It is a common mistake to render 'some Frenchmen are generous' by (∃x)(Fx→Gx) rather than the correct (∃x)(Fx&Gx). 'All Frenchmen are generous' is properly rendered by a conditional, and true if there are no Frenchmen.
     From: E.J. Lemmon (Beginning Logic [1965], 3.1)
     A reaction: The existential quantifier implies the existence of an x, but the universal quantifier does not.
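     A toy Python model (my example, not Lemmon's) with no Frenchmen in the domain brings out the mistake: the conditional rendering comes out true via a vacuous witness, while the conjunctive rendering is false, as the English sentence requires.

          domain = ["hans", "maria"]
          french = {"hans": False, "maria": False}     # Fx
          generous = {"hans": False, "maria": True}    # Gx

          mistaken = any((not french[x]) or generous[x] for x in domain)   # (∃x)(Fx→Gx)
          correct = any(french[x] and generous[x] for x in domain)         # (∃x)(Fx∧Gx)
          print(mistaken, correct)   # True False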
4. Formal Logic / D. Modal Logic ML / 3. Modal Logic Systems / a. Systems of modal logic
Necessity is provability in S4, and true in all worlds in S5 [Read]
     Full Idea: In S4 necessity is said to be informal 'provability', and in S5 it is said to be 'true in every possible world'.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: It seems that the S4 version is proof-theoretic, and the S5 version is semantic.
4. Formal Logic / E. Nonclassical Logics / 4. Fuzzy Logic
There are fuzzy predicates (and sets), and fuzzy quantifiers and modifiers [Read]
     Full Idea: In fuzzy logic, besides fuzzy predicates, which define fuzzy sets, there are also fuzzy quantifiers (such as 'most' and 'few') and fuzzy modifiers (such as 'usually').
     From: Stephen Read (Thinking About Logic [1995], Ch.7)
4. Formal Logic / E. Nonclassical Logics / 6. Free Logic
Some say there are positive, negative and neuter free logics [Read]
     Full Idea: It is normal to classify free logics into three sorts; positive free logics (some propositions with empty terms are true), negative free logics (they are false), and neuter free logics (they lack truth-value), though I find this unhelpful and superficial.
     From: Stephen Read (Thinking About Logic [1995], Ch.5)
4. Formal Logic / F. Set Theory ST / 5. Conceptions of Set / c. Logical sets
Realism tends to embrace the full Comprehension Principle, that all good concepts determine sets [Read]
     Full Idea: Hard-headed realism tends to embrace the full Comprehension Principle, that every well-defined concept determines a set.
     From: Stephen Read (Thinking About Logic [1995], Ch.8)
     A reaction: This sort of thing gets you into trouble with Russell's paradox (though that is presumably meant to be excluded somehow by 'well-defined'). There are lots of diluted Comprehension Principles.
5. Theory of Logic / A. Overview of Logic / 4. Pure Logic
If logic is topic-neutral that means it delves into all subjects, rather than having a pure subject matter [Read]
     Full Idea: The topic-neutrality of logic need not mean there is a pure subject matter for logic; rather, that the logician may need to go everywhere, into mathematics and even into metaphysics.
     From: Stephen Read (Formal and Material Consequence [1994], 'Logic')
5. Theory of Logic / A. Overview of Logic / 5. First-Order Logic
Not all validity is captured in first-order logic [Read]
     Full Idea: We must recognise that first-order classical logic is inadequate to describe all valid consequences, that is, all cases in which it is impossible for the premisses to be true and the conclusion false.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: This is despite the fact that first-order logic is 'complete', in the sense that its own truths are all provable.
5. Theory of Logic / A. Overview of Logic / 6. Classical Logic
The non-emptiness of the domain is characteristic of classical logic [Read]
     Full Idea: The non-emptiness of the domain is characteristic of classical logic.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
5. Theory of Logic / A. Overview of Logic / 7. Second-Order Logic
Semantics must precede proof in higher-order logics, since they are incomplete [Read]
     Full Idea: For the realist, study of semantic structures comes before study of proofs. In higher-order logic it has to, for the logics are incomplete.
     From: Stephen Read (Thinking About Logic [1995], Ch.9)
     A reaction: This seems to be an important general observation about any incomplete system, such as Peano arithmetic. You may dream the old rationalist dream of starting from the beginning and proving everything, but you can't. Start with truth and meaning.
5. Theory of Logic / A. Overview of Logic / 8. Logic of Mathematics
We should exclude second-order logic, precisely because it captures arithmetic [Read]
     Full Idea: Those who believe mathematics goes beyond logic use that fact to argue that classical logic is right to exclude second-order logic.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
5. Theory of Logic / B. Logical Consequence / 1. Logical Consequence
Not all arguments are valid because of form; validity is just true premises and false conclusion being impossible [Read]
     Full Idea: Belief that every valid argument is valid in virtue of form is a myth. ..Validity is a question of the impossibility of true premises and false conclusion for whatever reason, and some arguments are materially valid and the reason is not purely logical.
     From: Stephen Read (Formal and Material Consequence [1994], 'Logic')
     A reaction: An example of a non-logical reason is the transitive nature of 'taller than'. Conceptual connections are the usual example, as in 'it's red so it is coloured'. This seems to be a defence of the priority of semantic consequence in logic.
If the logic of 'taller of' rests just on meaning, then logic may be the study of merely formal consequence [Read]
     Full Idea: In 'A is taller than B, and B is taller than C, so A is taller than C' this can be seen as a matter of meaning - it is part of the meaning of 'taller' that it is transitive, but not of logic. Logic is now seen as the study of formal consequence.
     From: Stephen Read (Formal and Material Consequence [1994], 'Reduct')
     A reaction: I think I find this approach quite appealing. Obviously you can reason about taller-than relations, by putting the concepts together like jigsaw pieces, but I tend to think of logic as something which is necessarily implementable on a machine.
Maybe arguments are only valid when suppressed premises are all stated - but why? [Read]
     Full Idea: Maybe some arguments are really only valid when a suppressed premise is made explicit, as when we say that 'taller than' is a transitive concept. ...But what is added by making the hidden premise explicit? It cannot alter the soundness of the argument.
     From: Stephen Read (Formal and Material Consequence [1994], 'Suppress')
A theory of logical consequence is a conceptual analysis, and a set of validity techniques [Read]
     Full Idea: A theory of logical consequence, while requiring a conceptual analysis of consequence, also searches for a set of techniques to determine the validity of particular arguments.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
Logical consequence isn't just a matter of form; it depends on connections like round-square [Read]
     Full Idea: If classical logic insists that logical consequence is just a matter of the form, we fail to include as valid consequences those inferences whose correctness depends on the connections between non-logical terms (such as 'round' and 'square').
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: He suggests that an inference such as 'round, so not square' should be labelled as 'materially valid'.
5. Theory of Logic / B. Logical Consequence / 5. Modus Ponens
In modus ponens the 'if-then' premise contributes nothing if the conclusion follows anyway [Read]
     Full Idea: A puzzle about modus ponens is that the major premise is either false or unnecessary: A, If A then B / so B. If the major premise is true, then B follows from A, so the major premise is redundant. So it is false or not needed, and contributes nothing.
     From: Stephen Read (Formal and Material Consequence [1994], 'Repres')
     A reaction: Not sure which is the 'major premise' here, but it seems to be saying that the 'if A then B' is redundant. If I say 'it's raining so the grass is wet', it seems pointless to slip in the middle the remark that rain implies wet grass. Good point.
5. Theory of Logic / B. Logical Consequence / 8. Material Implication
The paradoxes of material implication are P |- Q → P, and ¬P |- P → Q [Lemmon]
     Full Idea: The paradoxes of material implication are P |- Q → P, and ¬P |- P → Q. That is, since Napoleon was French, then if the moon is blue then Napoleon was French; and since Napoleon was not Chinese, then if Napoleon was Chinese, the moon is blue.
     From: E.J. Lemmon (Beginning Logic [1965], 2.2)
     A reaction: This is why the symbol → does not really mean the 'if...then' of ordinary English. Russell named it 'material implication' to show that it was a distinctively logical operator.
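     A Python check of both sequents (a semantic sketch, not a derivation): in no row is the premise true while the conclusion is false.

          from itertools import product

          def conditional(a, b):
              return not (a and not b)

          for p, q in product([True, False], repeat=2):
              assert conditional(p, conditional(q, p))        # P |- Q→P
              assert conditional(not p, conditional(p, q))    # ¬P |- P→Q

          print("Both sequents are tautologically valid")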
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
Logical connectives contain no information, but just record combination relations between facts [Read]
     Full Idea: The logical connectives are useful for bundling information, that B follows from A, or that one of A or B is true. ..They import no information of their own, but serve to record combinations of other facts.
     From: Stephen Read (Formal and Material Consequence [1994], 'Repres')
     A reaction: Anyone who suggests a link between logic and 'facts' gets my vote, so this sounds a promising idea. However, logical truths have a high degree of generality, which seems somehow above the 'facts'.
5. Theory of Logic / E. Structures of Logic / 8. Theories in Logic
A theory is logically closed, which means infinite premisses [Read]
     Full Idea: A 'theory' is any logically closed set of propositions, ..and since any proposition has infinitely many consequences, including all the logical truths, theories have infinitely many premisses.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: Read is introducing this as the essential preliminary to an account of the Compactness Theorem, which relates these infinite premisses to the finite.
5. Theory of Logic / G. Quantification / 1. Quantification
Quantifiers are second-order predicates [Read]
     Full Idea: Quantifiers are second-order predicates.
     From: Stephen Read (Thinking About Logic [1995], Ch.5)
     A reaction: [He calls this 'Frege's insight'] They seem to be second-order in Tarski's sense, that they are part of a metalanguage about the sentence, rather than being a part of the sentence.
5. Theory of Logic / G. Quantification / 5. Second-Order Quantification
In second-order logic the higher-order variables range over all the properties of the objects [Read]
     Full Idea: The defining factor of second-order logic is that, while the domain of its individual variables may be arbitrary, the range of its second-order variables is all the properties of the objects in its domain (or, thinking extensionally, the sets of objects).
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: The key point is that the domain is 'all' of the properties. How many properties does an object have? You need to decide whether you believe in sparse or abundant properties (I vote for very sparse indeed).
5. Theory of Logic / I. Semantics of Logic / 3. Logical Truth
A logical truth is the conclusion of a valid inference with no premisses [Read]
     Full Idea: Logical truth is a degenerate, or extreme, case of consequence. A logical truth is the conclusion of a valid inference with no premisses, or a proposition in the premisses of an argument which is unnecessary or may be suppressed.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
Any first-order theory of sets is inadequate [Read]
     Full Idea: Any first-order theory of sets is inadequate because of the Löwenheim-Skolem-Tarski property, and the consequent Skolem paradox.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: The limitation is in giving an account of infinities.
5. Theory of Logic / K. Features of Logics / 6. Compactness
Compactness is when any consequence of infinite propositions is the consequence of a finite subset [Read]
     Full Idea: Classical logical consequence is compact, which means that any consequence of an infinite set of propositions (such as a theory) is a consequence of some finite subset of them.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
Compactness does not deny that an inference can have infinitely many premisses [Read]
     Full Idea: Compactness does not deny that an inference can have infinitely many premisses. It can; but classically, it is valid if and only if the conclusion follows from a finite subset of them.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
Compactness blocks the proof of 'for every n, A(n)' (as the proof would be infinite) [Read]
     Full Idea: Compact consequence undergenerates - there are intuitively valid consequences which it marks as invalid, such as the ω-rule, that if A holds of each natural number, then 'for every n, A(n)', but the proof of that would be infinite, for each number.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
Compactness makes consequence manageable, but restricts expressive power [Read]
     Full Idea: Compactness is a virtue - it makes the consequence relation more manageable; but it is also a limitation - it limits the expressive power of the logic.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: The major limitation is that wholly infinite proofs are not permitted, as in Idea 10977.
5. Theory of Logic / L. Paradox / 6. Paradoxes in Language / a. The Liar paradox
Self-reference paradoxes seem to arise only when falsity is involved [Read]
     Full Idea: It cannot be self-reference alone that is at fault. Rather, what seems to cause the problems in the paradoxes is the combination of self-reference with falsity.
     From: Stephen Read (Thinking About Logic [1995], Ch.6)
6. Mathematics / A. Nature of Mathematics / 5. The Infinite / d. Actual infinite
Infinite cuts and successors seems to suggest an actual infinity there waiting for us [Read]
     Full Idea: Every potential infinity seems to suggest an actual infinity - e.g. generating successors suggests they are really all there already; cutting the line suggests that the point where the cut is made is already in place.
     From: Stephen Read (Thinking About Logic [1995], Ch.8)
     A reaction: Finding a new gambit in chess suggests it was there waiting for us, but we obviously invented chess. Daft.
6. Mathematics / B. Foundations for Mathematics / 4. Axioms for Number / e. Peano arithmetic 2nd-order
Although second-order arithmetic is incomplete, it can fully model normal arithmetic [Read]
     Full Idea: Second-order arithmetic is categorical - indeed, there is a single formula of second-order logic whose only model is the standard model ω, consisting of just the natural numbers, with all of arithmetic following. It is nevertheless incomplete.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: This is the main reason why second-order logic has a big fan club, despite the logic being incomplete (as well as the arithmetic).
Second-order arithmetic covers all properties, ensuring categoricity [Read]
     Full Idea: Second-order arithmetic can rule out the non-standard models (with non-standard numbers). Its induction axiom crucially refers to 'any' property, which gives the needed categoricity for the models.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
6. Mathematics / B. Foundations for Mathematics / 5. Definitions of Number / g. Von Neumann numbers
Von Neumann numbers are helpful, but don't correctly describe numbers [Read]
     Full Idea: The Von Neumann numbers have a structural isomorphism to the natural numbers - each number is the set of all its predecessors, so 2 is the set of 0 and 1. This helps proofs, but is unacceptable. 2 is not a set with two members, or a member of 3.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
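     A Python sketch of the construction Read describes, using frozensets: it exhibits exactly the features he finds unacceptable, namely that 2 comes out as a set with two members and as a member of 3.

          def von_neumann(n):
              number = frozenset()              # 0 is the empty set
              for _ in range(n):
                  number = number | {number}    # successor of s is s ∪ {s}
              return number

          two, three = von_neumann(2), von_neumann(3)
          print(len(two))       # 2 - the set of its predecessors, 0 and 1
          print(two in three)   # True - 2 is a member of 3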
7. Existence / D. Theories of Reality / 10. Vagueness / d. Vagueness as linguistic
Would a language without vagueness be usable at all? [Read]
     Full Idea: We must ask whether a language without vagueness would be usable at all.
     From: Stephen Read (Thinking About Logic [1995], Ch.7)
     A reaction: Popper makes a similar remark somewhere, with which I heartily agreed. This is the idea of 'spreading the word' over the world, which seems the right way of understanding it.
7. Existence / D. Theories of Reality / 10. Vagueness / f. Supervaluation for vagueness
Supervaluations say there is a cut-off somewhere, but at no particular place [Read]
     Full Idea: The supervaluation approach to vagueness is to construe vague predicates not as ones with fuzzy borderlines and no cut-off, but as having a cut-off somewhere, but in no particular place.
     From: Stephen Read (Thinking About Logic [1995], Ch.7)
     A reaction: Presumably you narrow down the gap by supervaluation, then split the difference to get a definite value.
A 'supervaluation' gives a proposition consistent truth-value for classical assignments [Read]
     Full Idea: A 'supervaluation' says a proposition is true if it is true in all classical extensions of the original partial valuation. Thus 'A or not-A' has no valuation for an empty name, but if 'extended' to make A true or not-true, not-A always has opposite value.
     From: Stephen Read (Thinking About Logic [1995], Ch.5)
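     A simplified propositional sketch in Python (Read's own discussion concerns empty names in predicate logic): a formula counts as supertrue if it is true under every classical completion of a partial valuation.

          from itertools import product

          def supertrue(formula, atoms, partial):
              gaps = [a for a in atoms if a not in partial]
              return all(formula({**partial, **dict(zip(gaps, values))})
                         for values in product([True, False], repeat=len(gaps)))

          # 'A or not-A' is supertrue with A left unvalued; 'A' and 'not-A' are not,
          # so A itself gets no supervalue.
          print(supertrue(lambda v: v["A"] or not v["A"], ["A"], {}))   # True
          print(supertrue(lambda v: v["A"], ["A"], {}))                 # False
          print(supertrue(lambda v: not v["A"], ["A"], {}))             # False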
Identities and the Indiscernibility of Identicals don't work with supervaluations [Read]
     Full Idea: In supervaluations, the Law of Identity has no value for empty names, and remains so if extended. The Indiscernibility of Identicals also fails when extended to non-denoting terms, where Fa comes out true and Fb false.
     From: Stephen Read (Thinking About Logic [1995], Ch.5)
9. Objects / A. Existence of Objects / 5. Individuation / d. Individuation by haecceity
A haecceity is a set of individual properties, essential to each thing [Read]
     Full Idea: The haecceitist (a neologism coined by Duns Scotus, pronounced 'hex-ee-it-ist', meaning literally 'thisness') believes that each thing has an individual essence, a set of properties which are essential to it.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: This seems to be a difference of opinion over whether a haecceity is a set of essential properties, or a bare particular. The key point is that it is unique to each entity.
10. Modality / A. Necessity / 2. Nature of Necessity
Equating necessity with truth in every possible world is the S5 conception of necessity [Read]
     Full Idea: The equation of 'necessity' with 'true in every possible world' is known as the S5 conception, corresponding to the strongest of C.I.Lewis's five modal systems.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: Are the worlds naturally, or metaphysically, or logically possible?
10. Modality / B. Possibility / 8. Conditionals / a. Conditionals
The point of conditionals is to show that one will accept modus ponens [Read]
     Full Idea: The point of conditionals is to show that one will accept modus ponens.
     From: Stephen Read (Thinking About Logic [1995], Ch.3)
     A reaction: [He attributes this idea to Frank Jackson] This makes the point, against Grice, that the implication of conditionals is not conversational but a matter of logical convention. See Idea 21396 for a very different view.
The standard view of conditionals is that they are truth-functional [Read]
     Full Idea: The standard view of conditionals is that they are truth-functional, that is, that their truth-values are determined by the truth-values of their constituents.
     From: Stephen Read (Thinking About Logic [1995], Ch.3)
Some people even claim that conditionals do not express propositions [Read]
     Full Idea: Some people even claim that conditionals do not express propositions.
     From: Stephen Read (Thinking About Logic [1995], Ch.7)
     A reaction: See Idea 14283, where this appears to have been 'proved' by Lewis, and is not just a view held by some people.
10. Modality / B. Possibility / 8. Conditionals / d. Non-truthfunction conditionals
Conditionals are just a shorthand for some proof, leaving out the details [Read]
     Full Idea: Truth enables us to carry various reports around under certain descriptions ('what Iain said') without all the bothersome detail. Similarly, conditionals enable us to transmit a record of proof without its detail.
     From: Stephen Read (Formal and Material Consequence [1994], 'Repres')
     A reaction: This is his proposed Redundancy Theory of conditionals. It grows out of the problem with Modus Ponens mentioned in Idea 14184. To say that there is always an implied 'proof' seems a large claim.
10. Modality / E. Possible worlds / 1. Possible Worlds / a. Possible worlds
Knowledge of possible worlds is not causal, but is an ontology entailed by semantics [Read]
     Full Idea: The modal Platonist denies that knowledge always depends on a causal relation. The reality of possible worlds is an ontological requirement, to secure the truth-values of modal propositions.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: [Reply to Idea 10982] This seems to be a case of deriving your metaphysics from your semantics, of which David Lewis seems to be guilty, and which strikes me as misguided.
10. Modality / E. Possible worlds / 1. Possible Worlds / c. Possible worlds realism
How can modal Platonists know the truth of a modal proposition? [Read]
     Full Idea: If modal Platonism was true, how could we ever know the truth of a modal proposition?
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: I take this to be very important. Our knowledge of modal truths must depend on our knowledge of the actual world. The best answer seems to involve reference to the 'powers' of the actual world. A reply is in Idea 10983.
10. Modality / E. Possible worlds / 1. Possible Worlds / d. Possible worlds actualism
Actualism is reductionist (to parts of actuality), or moderate realist (accepting real abstractions) [Read]
     Full Idea: There are two main forms of actualism: reductionism, which seeks to construct possible worlds out of some more mundane material; and moderate realism, in which the actual concrete world is contrasted with abstract, but none the less real, possible worlds.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: I am a reductionist, as I do not take abstractions to be 'real' (precisely because they have been 'abstracted' from the things that are real). I think I will call myself a 'scientific modalist' - we build worlds from possibilities, discovered by science.
10. Modality / E. Possible worlds / 2. Nature of Possible Worlds / c. Worlds as propositions
A possible world is a determination of the truth-values of all propositions of a domain [Read]
     Full Idea: A possible world is a complete determination of the truth-values of all propositions over a certain domain.
     From: Stephen Read (Thinking About Logic [1995], Ch.2)
     A reaction: Even if the domain is very small? Even if the world fitted the logic nicely, but was naturally impossible?
10. Modality / E. Possible worlds / 3. Transworld Objects / c. Counterparts
If worlds are concrete, objects can't be present in more than one, and can only have counterparts [Read]
     Full Idea: If each possible world constitutes a concrete reality, then no object can be present in more than one world - objects may have 'counterparts', but cannot be identical with them.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: This explains clearly why in Lewis's modal realist scheme he needs counterparts instead of rigid designation. Sounds like a slippery slope. If you say 'Humphrey might have won the election', who are you talking about?
14. Science / D. Explanation / 2. Types of Explanation / j. Explanations by reduction
Six reduction levels: groups, lives, cells, molecules, atoms, particles [Putnam/Oppenheim, by Watson]
     Full Idea: There are six 'reductive levels' in science: social groups, (multicellular) living things, cells, molecules, atoms, and elementary particles.
     From: report of H.Putnam/P.Oppenheim (Unity of Science as a Working Hypothesis [1958]) by Peter Watson - Convergence 10 'Intro'
     A reaction: I have the impression that fields are seen as more fundamental than elementary particles. What is the status of the 'laws' that are supposed to govern these things? What is the status of space and time within this picture?
15. Nature of Minds / C. Capacities of Minds / 3. Abstraction by mind
The mind abstracts ways things might be, which are nonetheless real [Read]
     Full Idea: Ways things might be are real, but only when abstracted from the actual way things are. They are brought out and distinguished by the mind, by abstraction, but are not dependent on mind for their existence.
     From: Stephen Read (Thinking About Logic [1995], Ch.4)
     A reaction: To me this just flatly contradicts itself. The idea that the mind can 'bring something out' by its operations, with the result being then accepted as part of reality is nonsense on stilts. What is real is the powers that make the possibilities.
19. Language / C. Assigning Meanings / 4. Compositionality
Negative existentials with compositionality make the whole sentence meaningless [Read]
     Full Idea: A problem with compositionality is negative existential propositions. If some of the terms of the proposition are empty, and don't refer, then compositionality implies that the whole will lack meaning too.
     From: Stephen Read (Thinking About Logic [1995], Ch.5)
     A reaction: I don't agree. I don't see why compositionality implies holism about sentence-meaning. If I say 'that circular square is a psychopath', you understand the predication, despite being puzzled by the singular term.
19. Language / D. Propositions / 1. Propositions
A proposition objectifies what a sentence says, as indicative, with secure references [Read]
     Full Idea: A proposition makes an object out of what is said or expressed by the utterance of a certain sort of sentence, namely, one in the indicative mood which makes sense and doesn't fail in its references. It can then be an object of thought and belief.
     From: Stephen Read (Thinking About Logic [1995], Ch.1)
     A reaction: Nice, but two objections: I take it to be crucial to propositions that they eliminate ambiguities, and I take it that animals are capable of forming propositions. Read seems to regard them as fictions, but I take them to be brain events.