Combining Texts

All the ideas for 'Concerning the Author', 'Intermediate Logic' and 'What Numbers Could Not Be'



101 ideas

1. Philosophy / E. Nature of Metaphysics / 7. Against Metaphysics
The demonstrations of the metaphysicians are all moonshine [Peirce]
     Full Idea: The demonstrations of the metaphysicians are all moonshine.
     From: Charles Sanders Peirce (Concerning the Author [1897], p.2)
1. Philosophy / G. Scientific Philosophy / 3. Scientism
I am saturated with the spirit of physical science [Peirce]
     Full Idea: I am saturated, through and through, with the spirit of the physical sciences.
     From: Charles Sanders Peirce (Concerning the Author [1897], p.1)
4. Formal Logic / A. Syllogistic Logic / 2. Syllogistic Logic
Venn Diagrams map three predicates into eight compartments, then look for the conclusion [Bostock]
     Full Idea: Venn Diagrams are a traditional method to test validity of syllogisms. There are three interlocking circles, one for each predicate, thus dividing the universe into eight possible basic elementary quantifications. Is the conclusion in a compartment?
     From: David Bostock (Intermediate Logic [1997], 3.8)
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / b. Terminology of PL
'Disjunctive Normal Form' is ensuring that no conjunction has a disjunction within its scope [Bostock]
     Full Idea: 'Disjunctive Normal Form' (DNF) is rearranging the occurrences of ∧ and ∨ so that no conjunction sign has any disjunction in its scope. This is achieved by applying two of the distribution laws.
     From: David Bostock (Intermediate Logic [1997], 2.6)
'Conjunctive Normal Form' is ensuring that no disjunction has a conjunction within its scope [Bostock]
     Full Idea: 'Conjunctive Normal Form' (CNF) is rearranging the occurrences of ∧ and ∨ so that no disjunction sign has any conjunction in its scope. This is achieved by applying two of the distribution laws.
     From: David Bostock (Intermediate Logic [1997], 2.6)
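The two distribution laws that drive these rearrangements can be checked mechanically. Here is a minimal Python sketch (the lambda encoding and exhaustive truth-table check are illustrative devices, not Bostock's):

```python
from itertools import product

# Distribution laws used to reach DNF/CNF, checked by exhaustive truth tables:
#   a ∧ (b ∨ c)  ≡  (a ∧ b) ∨ (a ∧ c)   -- pushes ∧ inside ∨ (towards DNF)
#   a ∨ (b ∧ c)  ≡  (a ∨ b) ∧ (a ∨ c)   -- pushes ∨ inside ∧ (towards CNF)

def equivalent(f, g, n=3):
    """True iff f and g agree on every assignment to n variables."""
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n))

dnf_step = equivalent(lambda a, b, c: a and (b or c),
                      lambda a, b, c: (a and b) or (a and c))
cnf_step = equivalent(lambda a, b, c: a or (b and c),
                      lambda a, b, c: (a or b) and (a or c))
print(dnf_step, cnf_step)  # prints: True True
```

Applying the first law repeatedly (after pushing negations inward) leaves no disjunction inside a conjunction, which is DNF; the second law does the dual job for CNF.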
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Disjunction' says that Γ,φ∨ψ|= iff Γ,φ|= and Γ,ψ|= [Bostock]
     Full Idea: The Principle of Disjunction says that Γ,φ∨ψ |= iff Γ,φ |= and Γ,ψ |=.
     From: David Bostock (Intermediate Logic [1997], 2.5.G)
     A reaction: That is, a disjunction leads to a contradiction iff each disjunct separately leads to a contradiction.
'Assumptions' says that a formula entails itself (φ|=φ) [Bostock]
     Full Idea: The Principle of Assumptions says that any formula entails itself, i.e. φ |= φ. The principle depends just upon the fact that no interpretation assigns both T and F to the same formula.
     From: David Bostock (Intermediate Logic [1997], 2.5.A)
     A reaction: Thus one can introduce φ |= φ into any proof, and then use it to build more complex sequents needed to attain a particular target formula. Bostock's principle is more general than anything in Lemmon.
'Thinning' allows that if premisses entail a conclusion, then adding further premisses makes no difference [Bostock]
     Full Idea: The Principle of Thinning says that if a set of premisses entails a conclusion, then adding further premisses will still entail the conclusion. It is 'thinning' because it makes a weaker claim. If Γ |= φ then Γ,ψ |= φ.
     From: David Bostock (Intermediate Logic [1997], 2.5.B)
     A reaction: It is also called 'premise-packing'. It is the characteristic of a 'monotonic' logic - where once something is proved, it stays proved, whatever else is introduced.
The 'conditional' is that Γ|=φ→ψ iff Γ,φ|=ψ [Bostock]
     Full Idea: The Conditional Principle says that Γ |= φ→ψ iff Γ,φ |= ψ. With the addition of negation, this implies φ,φ→ψ |= ψ, which is 'modus ponens'.
     From: David Bostock (Intermediate Logic [1997], 2.5.H)
     A reaction: [Second half is in Ex. 2.5.4]
'Cutting' allows that if x is proved, and adding y then proves z, you can go straight to z [Bostock]
     Full Idea: The Principle of Cutting is the general point that entailment is transitive, extending this to cover entailments with more than one premiss. Thus if Γ |= φ and φ,Δ |= ψ then Γ,Δ |= ψ. Here φ has been 'cut out'.
     From: David Bostock (Intermediate Logic [1997], 2.5.C)
     A reaction: It might be called the Principle of Shortcutting, since you can get straight to the last conclusion, eliminating the intermediate step.
'Negation' says that Γ,¬φ|= iff Γ|=φ [Bostock]
     Full Idea: The Principle of Negation says that Γ,¬φ |= iff Γ |= φ. We also say that φ,¬φ |=, and hence by 'thinning on the right' that φ,¬φ |= ψ, which is 'ex falso quodlibet'.
     From: David Bostock (Intermediate Logic [1997], 2.5.E)
     A reaction: That is, roughly, if the formula gives consistency, the negation gives contradiction. 'Ex falso' says that anything will follow from a contradiction.
'Conjunction' says that Γ|=φ∧ψ iff Γ|=φ and Γ|=ψ [Bostock]
     Full Idea: The Principle of Conjunction says that Γ |= φ∧ψ iff Γ |= φ and Γ |= ψ. This implies φ,ψ |= φ∧ψ, which is ∧-introduction. It also implies ∧-elimination.
     From: David Bostock (Intermediate Logic [1997], 2.5.F)
     A reaction: [Second half is Ex. 2.5.3] That is, if they are entailed separately, they are entailed as a unit. It is a moot point whether these principles are theorems of propositional logic, or derivation rules.
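Instances of these semantic principles can be verified by brute-force truth tables. A sketch over two atoms (the function names and the finite check are mine; Bostock states the principles schematically):

```python
from itertools import product

# Semantic entailment over two atoms p, q: gamma |= phi iff every
# truth-table row making all of gamma true also makes phi true.
def entails(gamma, phi):
    return all(phi(p, q)
               for p, q in product([True, False], repeat=2)
               if all(g(p, q) for g in gamma))

p_ = lambda p, q: p
q_ = lambda p, q: q
conj = lambda p, q: p and q

# Conjunction: p, q |= p∧q (∧-introduction) ...
print(entails([p_, q_], conj))                    # True
# ... and p∧q |= p, p∧q |= q (∧-elimination)
print(entails([conj], p_), entails([conj], q_))   # True True
# Negation yields 'ex falso quodlibet': p, ¬p |= q holds vacuously,
# since no row makes both p and ¬p true.
print(entails([p_, lambda p, q: not p], q_))      # True
```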
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / e. Axioms of PL
A logic with ¬ and → needs three axiom-schemas and one rule as foundation [Bostock]
     Full Idea: For ¬,→ Schemas: (A1) |- φ→(ψ→φ); (A2) |- (φ→(ψ→ξ)) → ((φ→ψ)→(φ→ξ)); (A3) |- (¬φ→¬ψ) → (ψ→φ). Rule DET: from |- φ and |- φ→ψ, infer |- ψ.
     From: David Bostock (Intermediate Logic [1997], 5.2)
     A reaction: A1 says everything implies a truth, A2 is conditional proof, and A3 is contraposition. DET is modus ponens. This is Bostock's compact near-minimal axiom system for propositional logic. He adds two axioms and another rule for predicate logic.
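As an illustration of how such a system works, here is the standard derivation of |- φ→φ from A1, A2 and DET (a textbook exercise in any axiom system of this shape; not claimed to appear in Bostock in this form):

```latex
% Derivation of \vdash \varphi \to \varphi, instantiating A2 with
% \psi := \varphi\to\varphi and \xi := \varphi.
\begin{align*}
1.\ & (\varphi\to((\varphi\to\varphi)\to\varphi)) \to
      ((\varphi\to(\varphi\to\varphi))\to(\varphi\to\varphi)) && \text{A2}\\
2.\ & \varphi\to((\varphi\to\varphi)\to\varphi) && \text{A1}\\
3.\ & (\varphi\to(\varphi\to\varphi))\to(\varphi\to\varphi) && \text{DET, 1, 2}\\
4.\ & \varphi\to(\varphi\to\varphi) && \text{A1}\\
5.\ & \varphi\to\varphi && \text{DET, 3, 4}
\end{align*}
```

Five lines to prove something trivially valid: this is why the Deduction Theorem (discussed below in 5.3) matters so much for axiomatic proof.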
4. Formal Logic / E. Nonclassical Logics / 6. Free Logic
A 'free' logic can have empty names, and a 'universally free' logic can have empty domains [Bostock]
     Full Idea: A 'free' logic is one in which names are permitted to be empty. A 'universally free' logic is one in which the domain of an interpretation may also be empty.
     From: David Bostock (Intermediate Logic [1997], 8.6)
5. Theory of Logic / A. Overview of Logic / 6. Classical Logic
Truth is the basic notion in classical logic [Bostock]
     Full Idea: The most fundamental notion in classical logic is that of truth.
     From: David Bostock (Intermediate Logic [1997], 1.1)
     A reaction: The opening sentence of his book. Hence the first half of the book is about semantics, and only the second half deals with proof. Compare Idea 10282. The thought seems to be that you could leave out truth, but that makes logic pointless.
Elementary logic cannot distinguish clearly between the finite and the infinite [Bostock]
     Full Idea: In very general terms, we cannot express the distinction between what is finite and what is infinite without moving essentially beyond the resources available in elementary logic.
     From: David Bostock (Intermediate Logic [1997], 4.8)
     A reaction: This observation concludes a discussion of Compactness in logic.
Fictional characters wreck elementary logic, as they have contradictions and no excluded middle [Bostock]
     Full Idea: Discourse about fictional characters leads to a breakdown of elementary logic. We accept P or ¬P if the relevant story says so, but P∨¬P will not be true if the relevant story says nothing either way, and P∧¬P is true if the story is inconsistent.
     From: David Bostock (Intermediate Logic [1997], 8.5)
     A reaction: I really like this. Does one need to invent a completely new logic for fictional characters? Or must their logic be intuitionist, or paraconsistent, or both?
5. Theory of Logic / B. Logical Consequence / 3. Deductive Consequence |-
The syntactic turnstile |- φ means 'there is a proof of φ' or 'φ is a theorem' [Bostock]
     Full Idea: The syntactic turnstile |- φ means 'There is a proof of φ' (in the system currently being considered). Another way of saying the same thing is 'φ is a theorem'.
     From: David Bostock (Intermediate Logic [1997], 5.1)
5. Theory of Logic / B. Logical Consequence / 4. Semantic Consequence |=
Validity is a conclusion following for premises, even if there is no proof [Bostock]
     Full Idea: The classical definition of validity counts an argument as valid if and only if the conclusion does in fact follow from the premises, whether or not the argument contains any demonstration of this fact.
     From: David Bostock (Intermediate Logic [1997], 1.2)
     A reaction: Hence validity is given by |= rather than by |-. A common example is 'it is red so it is coloured', which seems true but beyond proof. In the absence of formal proof, you wonder whether validity is merely a psychological notion.
It seems more natural to express |= as 'therefore', rather than 'entails' [Bostock]
     Full Idea: In practice we avoid quotation marks and the explicit set-theoretic notation that explaining |= as 'entails' appears to demand. Hence it seems more natural to explain |= as simply representing the word 'therefore'.
     From: David Bostock (Intermediate Logic [1997], 1.3)
     A reaction: Not sure I quite understand that, but I have trained myself to say 'therefore' for the generic use of |=. In other consequences it seems better to read it as 'semantic consequence', to distinguish it from |-.
Γ|=φ is 'entails'; Γ|= is 'is inconsistent'; |=φ is 'valid' [Bostock]
     Full Idea: If we write Γ |= φ, with one formula to the right, then the turnstile abbreviates 'entails'. For a sequent of the form Γ |= it can be read as 'is inconsistent'. For |= φ we read it as 'valid'.
     From: David Bostock (Intermediate Logic [1997], 1.3)
5. Theory of Logic / B. Logical Consequence / 5. Modus Ponens
MPP: 'If Γ|=φ and Γ|=φ→ψ then Γ|=ψ' (omit Γs for Detachment) [Bostock]
     Full Idea: The Rule of Detachment is a version of Modus Ponens, and says 'If |=φ and |=φ→ψ then |=ψ'. This has no assumptions. Modus Ponens is the more general rule that 'If Γ|=φ and Γ|=φ→ψ then Γ|=ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: Modus Ponens is actually designed for use in proof based on assumptions (which isn't always the case). In Detachment the formulae are just valid, without dependence on assumptions to support them.
MPP is a converse of Deduction: If Γ |- φ→ψ then Γ,φ|-ψ [Bostock]
     Full Idea: Modus Ponens is equivalent to the converse of the Deduction Theorem, namely 'If Γ |- φ→ψ then Γ,φ|-ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: See 13615 for details of the Deduction Theorem. See 13614 for Modus Ponens.
5. Theory of Logic / D. Assumptions for Logic / 4. Identity in Logic
The sign '=' is a two-place predicate expressing that 'a is the same thing as b' (a=b) [Bostock]
     Full Idea: We shall use 'a=b' as short for 'a is the same thing as b'. The sign '=' thus expresses a particular two-place predicate. Officially we will use 'I' as the identity predicate, so that 'Iab' is a formula, but we normally 'abbreviate' this to 'a=b'.
     From: David Bostock (Intermediate Logic [1997], 8.1)
|= α=α and α=β |= φ(α/ξ) ↔ φ(β/ξ) fix identity [Bostock]
     Full Idea: We usually take these two principles together as the basic principles of identity: |= α=α and α=β |= φ(α/ξ) ↔ φ(β/ξ). The second (with scant regard for history) is known as Leibniz's Law.
     From: David Bostock (Intermediate Logic [1997], 8.1)
If we are to express that there at least two things, we need identity [Bostock]
     Full Idea: To say that there is at least one thing x such that Fx we need only use an existential quantifier, but to say that there are at least two things we need identity as well.
     From: David Bostock (Intermediate Logic [1997], 8.1)
     A reaction: The only clear account I've found of why logic may need to be 'with identity'. Without it, you can only reason about one thing or all things. Presumably plural quantification no longer requires '='?
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
Truth-functors are usually held to be defined by their truth-tables [Bostock]
     Full Idea: The usual view of the meaning of truth-functors is that each is defined by its own truth-table, independently of any other truth-functor.
     From: David Bostock (Intermediate Logic [1997], 2.7)
5. Theory of Logic / E. Structures of Logic / 5. Functions in Logic
A 'zero-place' function just has a single value, so it is a name [Bostock]
     Full Idea: We can talk of a 'zero-place' function, which is a new-fangled name for a familiar item; it just has a single value, and so it has the same role as a name.
     From: David Bostock (Intermediate Logic [1997], 8.2)
A 'total' function ranges over the whole domain, a 'partial' function over appropriate inputs [Bostock]
     Full Idea: Usually we allow that a function is defined for arguments of a suitable kind (a 'partial' function), but we can say that each function has one value for any object whatever, from the whole domain that our quantifiers range over (a 'total' function).
     From: David Bostock (Intermediate Logic [1997], 8.2)
     A reaction: He points out (p.338) that 'the father of..' is a functional expression, but it wouldn't normally take stones as input, so seems to be a partial function. But then it doesn't even take all male humans either. It only takes fathers!
5. Theory of Logic / F. Referring in Logic / 1. Naming / a. Names
In logic, a name is just any expression which refers to a particular single object [Bostock]
     Full Idea: The important thing about a name, for logical purposes, is that it is used to make a singular reference to a particular object; ..we say that any expression too may be counted as a name, for our purposes, if it too performs the same job.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: He cites definite descriptions as the most notoriously difficult case, in deciding whether or not they function as names. I take it as pretty obvious that sometimes they do and sometimes they don't (in ordinary usage).
5. Theory of Logic / F. Referring in Logic / 1. Naming / e. Empty names
An expression is only a name if it succeeds in referring to a real object [Bostock]
     Full Idea: An expression is not counted as a name unless it succeeds in referring to an object, i.e. unless there really is an object to which it refers.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: His 'i.e.' makes the existence condition sound sufficient, but in ordinary language you don't succeed in referring to 'that man over there' just because he exists. In modal contexts we presumably refer to hypothetical objects (pace Lewis).
5. Theory of Logic / F. Referring in Logic / 2. Descriptions / b. Definite descriptions
Definite descriptions don't always pick out one thing, as in denials of existence, or errors [Bostock]
     Full Idea: It is natural to suppose one only uses a definite description when one believes it describes only one thing, but exceptions are 'there is no such thing as the greatest prime number', or saying something false where the reference doesn't occur.
     From: David Bostock (Intermediate Logic [1997], 8.3)
Definite descriptions resemble names, but can't actually be names, if they don't always refer [Bostock]
     Full Idea: Although a definite description looks like a complex name, and in many ways behaves like a name, still it cannot be a name if names must always refer to objects. Russell gave the first proposal for handling such expressions.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: I take the simple solution to be a pragmatic one, as roughly shown by Donnellan, that sometimes they are used exactly like names, and sometimes as something else. The same phrase can have both roles. Confusing for logicians. Tough.
Because of scope problems, definite descriptions are best treated as quantifiers [Bostock]
     Full Idea: Because of the scope problem, it now seems better to 'parse' definite descriptions not as names but as quantifiers. 'The' is to be treated in the same category as acknowledged quantifiers like 'all' and 'some'. We write Ix - 'for the x such that..'.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: This seems intuitively rather good, since quantification in normal speech is much more sophisticated than the crude quantification of classical logic. But the fact is that they often function as names (but see Idea 13817).
Definite descriptions are usually treated like names, and are just like them if they uniquely refer [Bostock]
     Full Idea: In practice, definite descriptions are for the most part treated as names, since this is by far the most convenient notation (even though they have scope). ..When a description is uniquely satisfied then it does behave like a name.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: Apparent names themselves have problems when they wander away from uniquely picking out one thing, as in 'John Doe'.
We are only obliged to treat definite descriptions as non-names if only the former have scope [Bostock]
     Full Idea: If it is really true that definite descriptions have scopes whereas names do not, then Russell must be right to claim that definite descriptions are not names. If, however, this is not true, then it does no harm to treat descriptions as complex names.
     From: David Bostock (Intermediate Logic [1997], 8.8)
5. Theory of Logic / F. Referring in Logic / 2. Descriptions / c. Theory of definite descriptions
Names do not have scope problems (e.g. in placing negation), but Russell's account does have that problem [Bostock]
     Full Idea: In orthodox logic names are not regarded as having scope (for example, over where a negation is placed), whereas on Russell's theory definite descriptions certainly do. Russell had his own way of dealing with this.
     From: David Bostock (Intermediate Logic [1997], 8.3)
5. Theory of Logic / G. Quantification / 1. Quantification
'Prenex normal form' is all quantifiers at the beginning, out of the scope of truth-functors [Bostock]
     Full Idea: A formula is said to be in 'prenex normal form' (PNF) iff all its quantifiers occur in a block at the beginning, so that no quantifier is in the scope of any truth-functor.
     From: David Bostock (Intermediate Logic [1997], 3.7)
     A reaction: Bostock provides six equivalences which can be applied to manoeuvre any formula into prenex normal form. He proves that every formula can be arranged in PNF.
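The equivalences in question are presumably of the following standard kind (stated here from general logic, on the assumption that x is not free in ψ; Bostock's own six may be formulated differently):

```latex
% Prenexing equivalences (x not free in \psi):
\begin{align*}
\neg\forall x\,\varphi &\dashv\vdash \exists x\,\neg\varphi &
\neg\exists x\,\varphi &\dashv\vdash \forall x\,\neg\varphi \\
(\forall x\,\varphi)\wedge\psi &\dashv\vdash \forall x\,(\varphi\wedge\psi) &
(\exists x\,\varphi)\vee\psi &\dashv\vdash \exists x\,(\varphi\vee\psi) \\
(\forall x\,\varphi)\to\psi &\dashv\vdash \exists x\,(\varphi\to\psi) &
\psi\to\forall x\,\varphi &\dashv\vdash \forall x\,(\psi\to\varphi)
\end{align*}
% Example: \forall x\,Px \to Q becomes \exists x\,(Px \to Q).
```

Note the quantifier flip when a universal stands in the antecedent of a conditional: that is the case beginners most often get wrong.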
5. Theory of Logic / G. Quantification / 2. Domain of Quantification
If we allow empty domains, we must allow empty names [Bostock]
     Full Idea: We can show that if empty domains are permitted, then empty names must be permitted too.
     From: David Bostock (Intermediate Logic [1997], 8.4)
5. Theory of Logic / H. Proof Systems / 1. Proof Systems
An 'informal proof' is in no particular system, and uses obvious steps and some ordinary English [Bostock]
     Full Idea: An 'informal proof' is not in any particular proof system. One may use any rule of proof that is 'sufficiently obvious', and there is quite a lot of ordinary English in the proof, explaining what is going on at each step.
     From: David Bostock (Intermediate Logic [1997], 8.1)
5. Theory of Logic / H. Proof Systems / 2. Axiomatic Proof
Quantification adds two axiom-schemas and a new rule [Bostock]
     Full Idea: New axiom-schemas for quantifiers: (A4) |- ∀ξφ → φ(α/ξ), (A5) |- ∀ξ(ψ→φ) → (ψ→∀ξφ), plus the rule GEN: If |- φ then |- ∀ξφ(ξ/α).
     From: David Bostock (Intermediate Logic [1997], 5.6)
     A reaction: This follows on from Idea 13610, where he laid out his three axioms and one rule for propositional (truth-functional) logic. This Idea plus 13610 make Bostock's proposed axiomatisation of first-order logic.
Axiom systems from Frege, Russell, Church, Lukasiewicz, Tarski, Nicod, Kleene, Quine... [Bostock]
     Full Idea: Notably axiomatisations of first-order logic are by Frege (1879), Russell and Whitehead (1910), Church (1956), Lukasiewicz and Tarski (1930), Lukasiewicz (1936), Nicod (1917), Kleene (1952) and Quine (1951). Also Bostock (1997).
     From: David Bostock (Intermediate Logic [1997], 5.8)
     A reaction: My summary, from Bostock's appendix 5.8, which gives details of all of these nine systems. This nicely illustrates the status and nature of axiom systems, which have lost the absolute status they seemed to have in Euclid.
5. Theory of Logic / H. Proof Systems / 3. Proof from Assumptions
'Conditionalised' inferences point to the Deduction Theorem: If Γ,φ|-ψ then Γ|-φ→ψ [Bostock]
     Full Idea: If a group of formulae prove a conclusion, we can 'conditionalize' this into a chain of separate inferences, which leads to the Deduction Theorem (or Conditional Proof), that 'If Γ,φ|-ψ then Γ|-φ→ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: This is the rule CP (Conditional Proof) which can be found in the rules for propositional logic I transcribed from Lemmon's book.
The Deduction Theorem greatly simplifies the search for proof [Bostock]
     Full Idea: Use of the Deduction Theorem greatly simplifies the search for proof (or more strictly, the task of showing that there is a proof).
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: See 13615 for details of the Deduction Theorem. Bostock is referring to axiomatic proof, where it can be quite hard to decide which axioms are relevant. The Deduction Theorem enables the making of assumptions.
Proof by Assumptions can always be reduced to Proof by Axioms, using the Deduction Theorem [Bostock]
     Full Idea: By repeated transformations using the Deduction Theorem, any proof from assumptions can be transformed into a fully conditionalized proof, which is then an axiomatic proof.
     From: David Bostock (Intermediate Logic [1997], 5.6)
     A reaction: Since proof using assumptions is perhaps the most standard proof system (e.g. used in Lemmon, for many years the standard book at Oxford University), the Deduction Theorem is crucial for giving it solid foundations.
The Deduction Theorem and Reductio can 'discharge' assumptions - they aren't needed for the new truth [Bostock]
     Full Idea: Like the Deduction Theorem, one form of Reductio ad Absurdum (If Γ,φ|-[absurdity] then Γ|-¬φ) 'discharges' an assumption. Assume φ and obtain a contradiction, then we know ¬φ, without assuming φ.
     From: David Bostock (Intermediate Logic [1997], 5.7)
     A reaction: Thus proofs from assumption either arrive at conditional truths, or at truths that are true irrespective of what was initially assumed.
5. Theory of Logic / H. Proof Systems / 4. Natural Deduction
Natural deduction takes proof from assumptions (with its rules) as basic, and axioms play no part [Bostock]
     Full Idea: Natural deduction takes the notion of proof from assumptions as a basic notion, ...so it will use rules for use in proofs from assumptions, and axioms (as traditionally understood) will have no role to play.
     From: David Bostock (Intermediate Logic [1997], 6.1)
     A reaction: The main rules are those for introduction and elimination of truth functors.
Excluded middle is an introduction rule for negation, and ex falso quodlibet will eliminate it [Bostock]
     Full Idea: Many books take RAA (reductio) and DNE (double neg) as the natural deduction introduction- and elimination-rules for negation, but RAA is not a natural introduction rule. I prefer TND (tertium) and EFQ (ex falso) for ¬-introduction and -elimination.
     From: David Bostock (Intermediate Logic [1997], 6.2)
In natural deduction we work from the premisses and the conclusion, hoping to meet in the middle [Bostock]
     Full Idea: When looking for a proof of a sequent, the best we can do in natural deduction is to work simultaneously in both directions, forward from the premisses, and back from the conclusion, and hope they will meet in the middle.
     From: David Bostock (Intermediate Logic [1997], 6.5)
Natural deduction rules for → are the Deduction Theorem (→I) and Modus Ponens (→E) [Bostock]
     Full Idea: Natural deduction adopts as rules for → the Deduction Theorem and Modus Ponens, here called →I and →E. If ψ is proved from the assumption φ, we can write φ→ψ (→I). φ and φ→ψ permit ψ (→E).
     From: David Bostock (Intermediate Logic [1997], 6.2)
     A reaction: Natural deduction has this neat and appealing way of formally introducing or eliminating each connective, so that you know where you are, and you know what each one means.
5. Theory of Logic / H. Proof Systems / 5. Tableau Proof
Tableau proofs use reduction - seeking an impossible consequence from an assumption [Bostock]
     Full Idea: A tableau proof is a proof by reductio ad absurdum. One begins with an assumption, and one develops the consequences of that assumption, seeking to derive an impossible consequence.
     From: David Bostock (Intermediate Logic [1997], 4.1)
A completed open branch gives an interpretation which verifies those formulae [Bostock]
     Full Idea: An open branch in a completed tableau will always yield an interpretation that verifies every formula on the branch.
     From: David Bostock (Intermediate Logic [1997], 4.7)
     A reaction: In other words the open branch shows a model which seems to work (on the available information). Similarly a closed branch gives a model which won't work - a counterexample.
Non-branching rules add lines, and branching rules need a split; a branch with a contradiction is 'closed' [Bostock]
     Full Idea: Rules for semantic tableaus are of two kinds - non-branching rules and branching rules. The first allow the addition of further lines, and the second requires splitting the branch. A branch which assigns contradictory values to a formula is 'closed'.
     From: David Bostock (Intermediate Logic [1997], 4.1)
     A reaction: [compressed] Thus 'and' stays on one branch, asserting both formulae, but 'or' splits, checking first one and then the other. A proof succeeds when all the branches are closed, showing that the initial assumption leads only to contradictions.
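The branching and non-branching rules can be sketched as a tiny propositional tableau prover. This is an illustrative Python encoding (formulas as nested tuples), not Bostock's presentation:

```python
# Formulas: ('var', name), ('not', f), ('and', f, g), ('or', f, g).

def is_closed(branch):
    """A branch containing both f and ¬f is closed."""
    return any(('not', f) in branch for f in branch)

def satisfiable(branch):
    """Develop the branch by tableau rules; True iff some completed
    branch stays open (the formulae can all be true together)."""
    if is_closed(branch):
        return False
    for f in branch:
        rest = [g for g in branch if g != f]
        if f[0] == 'and':                       # non-branching: add both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == 'or':                        # branching: split on the disjuncts
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == 'not' and f[1][0] != 'var':
            g = f[1]
            if g[0] == 'not':                   # ¬¬a: add a
                return satisfiable(rest + [g[1]])
            if g[0] == 'and':                   # ¬(a∧b): branch on ¬a, ¬b
                return (satisfiable(rest + [('not', g[1])])
                        or satisfiable(rest + [('not', g[2])]))
            if g[0] == 'or':                    # ¬(a∨b): add ¬a and ¬b
                return satisfiable(rest + [('not', g[1]), ('not', g[2])])
    return True                                 # completed open branch

def valid(f):
    """The proof succeeds iff assuming ¬f closes every branch."""
    return not satisfiable([('not', f)])

P = ('var', 'p')
print(valid(('or', P, ('not', P))))   # True: excluded middle
print(valid(P))                        # False: a lone atom is not valid
```

The open/closed distinction here matches the surrounding entries: a completed open branch describes a verifying interpretation, while closure of every branch shows the initial assumption leads only to contradictions.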
In a tableau proof no sequent is established until the final branch is closed; hypotheses are explored [Bostock]
     Full Idea: In a tableau system no sequent is established until the final step of the proof, when the last branch closes, and until then we are simply exploring a hypothesis.
     From: David Bostock (Intermediate Logic [1997], 7.3)
     A reaction: This compares sharply with a sequent calculus, where every single step is a conclusive proof of something. So use tableaux for exploring proofs, and then sequent calculi for writing them up?
A tree proof becomes too broad if its only rule is Modus Ponens [Bostock]
     Full Idea: When the only rule of inference is Modus Ponens, the branches of a tree proof soon spread too wide for comfort.
     From: David Bostock (Intermediate Logic [1997], 6.4)
Tableau rules are all elimination rules, gradually shortening formulae [Bostock]
     Full Idea: In their original setting, all the tableau rules are elimination rules, allowing us to replace a longer formula by its shorter components.
     From: David Bostock (Intermediate Logic [1997], 7.3)
Unlike natural deduction, semantic tableaux have recipes for proving things [Bostock]
     Full Idea: With semantic tableaux there are recipes for proof-construction that we can operate, whereas with natural deduction there are not.
     From: David Bostock (Intermediate Logic [1997], 6.5)
5. Theory of Logic / H. Proof Systems / 6. Sequent Calculi
A sequent calculus is good for comparing proof systems [Bostock]
     Full Idea: A sequent calculus is a useful tool for comparing two systems that at first look utterly different (such as natural deduction and semantic tableaux).
     From: David Bostock (Intermediate Logic [1997], 7.2)
Each line of a sequent calculus is a conclusion of previous lines, each one explicitly recorded [Bostock]
     Full Idea: A sequent calculus keeps an explicit record of just what sequent is established at each point in a proof. Every line is itself the sequent proved at that point. It is not a linear sequence or array of formulae, but a matching array of whole sequents.
     From: David Bostock (Intermediate Logic [1997], 7.1)
5. Theory of Logic / I. Semantics of Logic / 1. Semantics of Logic
Interpretation by assigning objects to names, or assigning them to variables first [Bostock, by PG]
     Full Idea: There are two approaches to an 'interpretation' of a logic: the first method assigns objects to names, and then defines connectives and quantifiers, focusing on truth; the second assigns objects to variables, then variables to names, using satisfaction.
     From: report of David Bostock (Intermediate Logic [1997], 3.4) by PG - Db (lexicon)
     A reaction: [a summary of nine elusive pages in Bostock] He says he prefers the first method, but the second method is more popular because it handles open formulas, by treating free variables as if they were names.
5. Theory of Logic / I. Semantics of Logic / 5. Extensionalism
Extensionality is built into ordinary logic semantics; names have objects, predicates have sets of objects [Bostock]
     Full Idea: Extensionality is built into the semantics of ordinary logic. When a name-letter is interpreted as denoting something, we just provide the object denoted. All that we provide for a one-place predicate-letter is the set of objects that it is true of..
     From: David Bostock (Intermediate Logic [1997])
     A reaction: Could we keep the syntax of ordinary logic, and provide a wildly different semantics, much closer to real life? We could give up these dreadful 'objects' that Frege lumbered us with. Logic for processes, etc.
If an object has two names, truth is undisturbed if the names are swapped; this is Extensionality [Bostock]
     Full Idea: If two names refer to the same object, then in any proposition which contains either of them the other may be substituted in its place, and the truth-value of the proposition will be unaltered. This is the Principle of Extensionality.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: He acknowledges that ordinary language is full of counterexamples, such as 'he doesn't know the Morning Star and the Evening Star are the same body' (when he presumably knows that the Morning Star is the Morning Star). This is logic. Like maths.
5. Theory of Logic / K. Features of Logics / 2. Consistency
For 'negation-consistent', there is never |-(S)φ and |-(S)¬φ [Bostock]
     Full Idea: Any system of proof S is said to be 'negation-consistent' iff there is no formula such that |-(S)φ and |-(S)¬φ.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: Compare Idea 13542. This version seems to be a 'strong' version, as it demands a higher standard than 'absolute consistency'. Both halves of the condition would have to be established.
A proof-system is 'absolutely consistent' iff we don't have |-(S)φ for every formula [Bostock]
     Full Idea: Any system of proof S is said to be 'absolutely consistent' iff it is not the case that for every formula we have |-(S)φ.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: Bostock notes that a sound system will be both 'negation-consistent' (Idea 13541) and absolutely consistent. 'Tonk' systems can be shown to be unsound because the two come apart.
A set of formulae is 'inconsistent' when there is no interpretation which can make them all true [Bostock]
     Full Idea: 'Γ |=' means 'Γ is a set of closed formulae, and there is no (standard) interpretation in which all of the formulae in Γ are true'. We abbreviate this last to 'Γ is inconsistent'.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: This is a semantic approach to inconsistency, in terms of truth, as opposed to saying that we cannot prove both p and ¬p. I take this to be closer to the true concept, since you need never have heard of 'proof' to understand 'inconsistent'.
5. Theory of Logic / K. Features of Logics / 6. Compactness
Inconsistency or entailment just from functors and quantifiers is finitely based, if compact [Bostock]
     Full Idea: Being 'compact' means that if we have an inconsistency or an entailment which holds just because of the truth-functors and quantifiers involved, then it is always due to a finite number of the propositions in question.
     From: David Bostock (Intermediate Logic [1997], 4.8)
     A reaction: Bostock says this is surprising, given the examples 'a is not a parent of a parent of b...' etc, where an infinity seems to establish 'a is not an ancestor of b'. The point, though, is that this truth doesn't just depend on truth-functors and quantifiers.
Compactness means an infinity of sequents on the left will add nothing new [Bostock]
     Full Idea: The logic of truth-functions is compact, which means that sequents with infinitely many formulae on the left introduce nothing new. Hence we can confine our attention to finite sequents.
     From: David Bostock (Intermediate Logic [1997], 5.5)
     A reaction: This makes it clear why compactness is a limitation in logic. If you want the logic to be unlimited in scope, it isn't; it only proves things from finite numbers of sequents. This makes it easier to prove completeness for the system.
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / a. Numbers
There are no such things as numbers [Benacerraf]
     Full Idea: There are no such things as numbers.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: Mill said precisely the same (Idea 9794). I think I agree. There has been a classic error of reification. An abstract pattern is not an object. If I coin a word for all the three-digit numbers in our system, I haven't created a new 'object'.
Numbers can't be sets if there is no agreement on which sets they are [Benacerraf]
     Full Idea: The fact that Zermelo and Von Neumann disagree on which particular sets the numbers are is fatal to the view that each number is some particular set.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], II)
     A reaction: I agree. A brilliantly simple argument. There is the possibility that one of the two accounts is correct (I would vote for von Neumann's), but it is not actually possible to prove it.
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / c. Priority of numbers
Benacerraf says numbers are defined by their natural ordering [Benacerraf, by Fine,K]
     Full Idea: Benacerraf thinks of numbers as being defined by their natural ordering.
     From: report of Paul Benacerraf (What Numbers Could Not Be [1965]) by Kit Fine - Cantorian Abstraction: Recon. and Defence §5
     A reaction: My intuition is that cardinality is logically prior to ordinality, since it connects better with the experienced physical world of objects, just as people's having different heights must precede their being arranged in height order.
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / f. Cardinal numbers
To understand finite cardinals, it is necessary and sufficient to understand progressions [Benacerraf, by Wright,C]
     Full Idea: Benacerraf claims that the concept of a progression is in some way the fundamental arithmetical notion, essential to understanding the idea of a finite cardinal, with a grasp of progressions sufficing for grasping finite cardinals.
     From: report of Paul Benacerraf (What Numbers Could Not Be [1965]) by Crispin Wright - Frege's Concept of Numbers as Objects 3.xv
     A reaction: He cites Dedekind (and hence the Peano Axioms) as the source of this. The interest is that progression seems to be fundamental to ordinals, but this claims it is also fundamental to cardinals. Note that in the first instance they are finite.
A set has k members if it one-one corresponds with the numbers less than or equal to k [Benacerraf]
     Full Idea: Any set has k members if and only if it can be put into one-to-one correspondence with the set of numbers less than or equal to k.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], I)
     A reaction: This is 'Ernie's' view of things in the paper. This defines the finite cardinal numbers in terms of the finite ordinal numbers. He has already said that the set of numbers is well-ordered.
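The definition above is easy to make concrete. Here is a minimal Python sketch (my illustration, not Benacerraf's notation, and the function name is invented): a finite set has k members just in case its elements can be paired one-to-one with the numbers from 1 to k.

```python
def one_one_correspondence(s, k):
    """Pair the members of s with the numbers 1..k, if possible.

    Returns a dictionary witnessing the one-to-one correspondence,
    or None when the set does not have exactly k members.
    """
    elems = sorted(s)
    if len(elems) != k:
        return None  # no one-one correspondence exists
    return dict(zip(elems, range(1, k + 1)))

print(one_one_correspondence({'a', 'b', 'c'}, 3))  # {'a': 1, 'b': 2, 'c': 3}
print(one_one_correspondence({'a', 'b', 'c'}, 2))  # None
```

The witnessing map makes the point of the definition visible: having k members just is being pairable with the numbers up to k.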
To explain numbers you must also explain cardinality, the counting of things [Benacerraf]
     Full Idea: I would disagree with Quine. The explanation of cardinality - i.e. of the use of numbers for 'transitive counting', as I have called it - is part and parcel of the explication of number.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], I n2)
     A reaction: Quine says numbers are just a progression, with transitive counting as a bonus. Interesting that Benacerraf identifies cardinality with transitive counting. I would have thought it was the possession of numerical quantity, not ascertaining it.
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / c. Counting procedure
We can count intransitively (reciting numbers) without understanding transitive counting of items [Benacerraf]
     Full Idea: Learning number words in the right order is counting 'intransitively'; using them as measures of sets is counting 'transitively'. …It seems possible for someone to learn the former without learning the latter.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], I)
     A reaction: Scruton's nice question (Idea 3907) is whether you could be said to understand numbers if you could only count intransitively. I would have thought such a state contained no understanding at all of numbers. Benacerraf agrees.
Someone can recite numbers but not know how to count things; but not vice versa [Benacerraf]
     Full Idea: It seems that it is possible for someone to learn to count intransitively without learning to count transitively. But not vice versa.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], I)
     A reaction: Benacerraf favours the priority of the ordinals. It is doubtful whether you have grasped cardinality properly if you don't know how to count things. Could I understand 'he has 27 sheep', without understanding the system of natural numbers?
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / g. Applying mathematics
The application of a system of numbers is counting and measurement [Benacerraf]
     Full Idea: The application of a system of numbers is counting and measurement.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], I)
     A reaction: A simple point, but it needs spelling out. Counting seems prior, in experience if not in logic. Measuring is a luxury you find you can indulge in (by imagining your quantity split into parts), once you have mastered counting.
6. Mathematics / B. Foundations for Mathematics / 4. Axioms for Number / a. Axioms for numbers
For Zermelo 3 belongs to 17, but for Von Neumann it does not [Benacerraf]
     Full Idea: Ernie's number progression is [φ], [φ,[φ]], [φ,[φ],[φ,[φ]]],..., whereas Johnny's is [φ], [[φ]], [[[φ]]],... For Ernie 3 belongs to 17, not for Johnny. For Ernie 17 has 17 members; for Johnny it has one.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], II)
     A reaction: Benacerraf's point is that there is no proof-theoretic way to choose between them, though I am willing to offer my intuition that Ernie (whose progression is von Neumann's, despite the name) gives the right account. Seventeen pebbles 'contains' three pebbles; you must pass 3 to count to 17.
The successor of x is either x and all its members, or just the unit set of x [Benacerraf]
     Full Idea: For Ernie, the successor of a number x was the set consisting of x and all the members of x, while for Johnny the successor of x was simply [x], the unit set of x - the set whose only member is x.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], II)
     A reaction: See also Idea 9900. Benacerraf's famous point is that it doesn't seem to make any difference to arithmetic which version of set theory you choose as its basis. I take this to conclusively refute the idea that numbers ARE sets.
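The divergence between the two constructions can be checked mechanically. Here is a sketch in Python (my illustration, not from the text), using frozensets to play the role of pure sets, with the empty frozenset as φ:

```python
# The empty frozenset plays the role of φ (the empty set).
EMPTY = frozenset()

def vn_succ(x):
    """Ernie's rule: the successor of x is x together with all of x's members."""
    return x | {x}

def z_succ(x):
    """Johnny's rule: the successor of x is the unit set [x]."""
    return frozenset({x})

def number(n, succ):
    """Apply a successor rule n times to the empty set."""
    x = EMPTY
    for _ in range(n):
        x = succ(x)
    return x

vn3, vn17 = number(3, vn_succ), number(17, vn_succ)
z3, z17 = number(3, z_succ), number(17, z_succ)

print(vn3 in vn17)    # True: 3 belongs to 17 on Ernie's construction
print(z3 in z17)      # False: not on Johnny's
print(len(vn17))      # 17: for Ernie, 17 has 17 members
print(len(z17))       # 1: for Johnny, it has one
```

Both functions satisfy the Peano axioms equally well, which is exactly why arithmetic cannot decide between them.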
6. Mathematics / B. Foundations for Mathematics / 4. Axioms for Number / f. Mathematical induction
Ordinary or mathematical induction assumes for the first, then always for the next, and hence for all [Bostock]
     Full Idea: The principle of mathematical (or ordinary) induction says suppose the first number, 0, has a property; suppose that if any number has that property, then so does the next; then it follows that all numbers have the property.
     From: David Bostock (Intermediate Logic [1997], 2.8)
     A reaction: Ordinary induction is also known as 'weak' induction. Compare Idea 13359 for 'strong' or complete induction. The number sequence must have a first element, so this doesn't work for the integers.
Complete induction assumes for all numbers less than n, then also for n, and hence for all numbers [Bostock]
     Full Idea: The principle of complete induction says suppose that for every number, if all the numbers less than it have a property, then so does it; it then follows that every number has the property.
     From: David Bostock (Intermediate Logic [1997], 2.8)
     A reaction: Complete induction is also known as 'strong' induction. Compare Idea 13358 for 'weak' or mathematical induction. The number sequence need have no first element.
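The two induction principles correspond to two recursion schemas, which can be sketched in Python (my illustration; the helper names are invented). Ordinary induction matches recursion on the immediate predecessor; complete induction matches 'course-of-values' recursion on all smaller cases:

```python
def by_weak_induction(base, step):
    """Define f on the naturals from f(0)=base and f(n+1)=step(n, f(n)).

    Mirrors ordinary ('weak') induction: each value depends only on
    its immediate predecessor.
    """
    def f(n):
        value = base
        for k in range(n):
            value = step(k, value)
        return value
    return f

def by_complete_induction(step):
    """Define f(n) from all the earlier values f(0)..f(n-1).

    Mirrors complete ('strong') induction: each value may depend on
    every smaller case at once.
    """
    def f(n):
        values = []
        for k in range(n + 1):
            values.append(step(k, values[:k]))
        return values[n]
    return f

# Factorial needs only the predecessor, so weak induction suffices.
factorial = by_weak_induction(1, lambda n, prev: (n + 1) * prev)
# A course-of-values example: f(n) = 1 + the sum of all earlier values.
cov = by_complete_induction(lambda n, earlier: 1 + sum(earlier))

print(factorial(5))  # 120
print(cov(4))        # 16
```

That every smaller case is available at once is what lets complete induction dispense with a first element.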
6. Mathematics / B. Foundations for Mathematics / 6. Mathematics as Set Theory / b. Mathematics is not set theory
Disputes about mathematical objects seem irrelevant, and mathematicians cannot resolve them [Benacerraf, by Friend]
     Full Idea: If two children were brought up knowing two different set theories, they could entirely agree on how to do arithmetic, up to the point where they discuss ontology. There is no mathematical way to tell which is the true representation of numbers.
     From: report of Paul Benacerraf (What Numbers Could Not Be [1965]) by Michèle Friend - Introducing the Philosophy of Mathematics
     A reaction: Benacerraf ends by proposing a structuralist approach. If mathematics is consistent with conflicting set theories, then those theories are not shedding light on mathematics.
No particular pair of sets can tell us what 'two' is, just by one-to-one correlation [Benacerraf, by Lowe]
     Full Idea: Hume's Principle can't tell us what a cardinal number is (this is one lesson of Benacerraf's well-known problem). An infinity of pairs of sets could actually be the number two (not just the simplest sets).
     From: report of Paul Benacerraf (What Numbers Could Not Be [1965]) by E.J. Lowe - The Possibility of Metaphysics 10.3
     A reaction: The drift here is for numbers to end up as being basic, axiomatic, indefinable, universal entities. Since I favour patterns as the basis of numbers, I think the basis might be in a pre-verbal experience, which even a bird might have, viewing its eggs.
If ordinal numbers are 'reducible to' some set-theory, then which is which? [Benacerraf]
     Full Idea: If a particular set-theory is in a strong sense 'reducible to' the theory of ordinal numbers... then we can still ask, but which is really which?
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIB)
     A reaction: A nice question about all reductions. If we reduce mind to brain, does that mean that brain is really just mind. To have a direction (up/down?), reduction must lead to explanation in a single direction only. Do numbers explain sets?
6. Mathematics / B. Foundations for Mathematics / 7. Mathematical Structuralism / a. Structuralism
If any recursive sequence will explain ordinals, then it seems to be the structure which matters [Benacerraf]
     Full Idea: That any recursive sequence whatever would do to explain ordinal numbers suggests that what is important is not the individuality of each element, but the structure which they jointly exhibit.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: This sentence launched the whole modern theory of Structuralism in mathematics. It is hard to see what properties a number-as-object could have which would entail its place in an ordinal sequence.
The job is done by the whole system of numbers, so numbers are not objects [Benacerraf]
     Full Idea: 'Objects' do not do the job of numbers singly; the whole system performs the job or nothing does. I therefore argue that numbers could not be objects at all.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: This thought is explored by structuralism - though it is a moot point whether mere 'nodes' in a system (perhaps filled with old bits of furniture) will do the job either. No one ever explains the 'power' of numbers (felt when you do a sudoku). Causal?
The number 3 defines the role of being third in a progression [Benacerraf]
     Full Idea: Any object can play the role of 3; that is, any object can be the third element in some progression. What is peculiar to 3 is that it defines that role, not by being a paradigm, but by representing the relation of any third member of a progression.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: An interesting early attempt to spell out the structuralist idea. I'm thinking that the role is spelled out by the intersection of patterns which involve threes.
Number words no more have referents than do the parts of a ruler [Benacerraf]
     Full Idea: Questions of the identification of the referents of number words should be dismissed as misguided in just the way that a question about the referents of the parts of a ruler would be seen as misguided.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: What a very nice simple point. It would be very strange to insist that every single part of the continuum of a ruler should be regarded as an 'object'.
Mathematical objects only have properties relating them to other 'elements' of the same structure [Benacerraf]
     Full Idea: Mathematical objects have no properties other than those relating them to other 'elements' of the same structure.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], p.285), quoted by Fraser MacBride - Structuralism Reconsidered §3 n13
     A reaction: Suppose we only had one number - 13 - and we all cried with joy when we recognised it in a group of objects. Would that be a number, or just a pattern, or something hovering between the two?
How can numbers be objects if order is their only property? [Benacerraf, by Putnam]
     Full Idea: Benacerraf raises the question how numbers can be 'objects' if they have no properties except order in a particular ω-sequence.
     From: report of Paul Benacerraf (What Numbers Could Not Be [1965], p.301) by Hilary Putnam - Mathematics without Foundations
     A reaction: Frege certainly didn't think that order was their only property (see his 'borehole' metaphor in Grundlagen). It might be better to say that they are objects which only have relational properties.
6. Mathematics / C. Sources of Mathematics / 1. Mathematical Platonism / b. Against mathematical platonism
Number-as-objects works wholesale, but fails utterly object by object [Benacerraf]
     Full Idea: The identification of numbers with objects works wholesale but fails utterly object by object.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], IIIC)
     A reaction: This seems to be a glaring problem for platonists. You can stare at 1728 till you are blue in the face, but it only begins to have any properties at all once you examine its place in the system. This is unusual behaviour for an object.
6. Mathematics / C. Sources of Mathematics / 5. Numbers as Adjectival
Number words are not predicates, as they function very differently from adjectives [Benacerraf]
     Full Idea: The unpredicative nature of number words can be seen by noting how different they are from, say, ordinary adjectives, which do function as predicates.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], II)
     A reaction: He points out that 'x is seventeen' is a rare construction in English, unlike 'x is happy/green/interesting', and that numbers outrank all other adjectives (having to appear first in any string of them).
6. Mathematics / C. Sources of Mathematics / 6. Logicism / d. Logicism critique
The set-theory paradoxes mean that 17 can't be the class of all classes with 17 members [Benacerraf]
     Full Idea: In no consistent theory is there a class of all classes with seventeen members. The existence of the paradoxes is a good reason to deny to 'seventeen' this univocal role of designating the class of all classes with seventeen members.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], II)
     A reaction: This was Frege's disaster, and seems to block any attempt to achieve logicism by translating numbers into sets. It now seems unclear whether set theory is logic, or mathematics, or sui generis.
8. Modes of Existence / A. Relations / 4. Formal Relations / a. Types of relation
Relations can be one-many (at most one on the left) or many-one (at most one on the right) [Bostock]
     Full Idea: A relation is 'one-many' if for anything on the right there is at most one on the left (∀xyz(Rxz∧Ryz→x=y)), and is 'many-one' if for anything on the left there is at most one on the right (∀xyz(Rzx∧Rzy→x=y)).
     From: David Bostock (Intermediate Logic [1997], 8.1)
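For a finite relation given as a set of ordered pairs, both conditions can be checked directly. A minimal Python sketch (my illustration; the function names and the example relation are invented):

```python
def is_one_many(pairs):
    """At most one item on the left for anything on the right:
    Rxz and Ryz imply x = y."""
    return all(x == y
               for (x, z1) in pairs
               for (y, z2) in pairs if z1 == z2)

def is_many_one(pairs):
    """At most one item on the right for anything on the left:
    Rzx and Rzy imply x = y."""
    return all(x == y
               for (z1, x) in pairs
               for (z2, y) in pairs if z1 == z2)

# 'mother of' is one-many: each person (right) has at most one mother (left).
mother_of = {('ann', 'bob'), ('ann', 'carol'), ('dee', 'ed')}
print(is_one_many(mother_of))   # True
print(is_many_one(mother_of))   # False: ann has two children listed
```

The two checks are mirror images, just as the two quantified formulas are.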
A relation is not reflexive, just because it is transitive and symmetrical [Bostock]
     Full Idea: It is easy to fall into the error of supposing that a relation which is both transitive and symmetrical must also be reflexive.
     From: David Bostock (Intermediate Logic [1997], 4.7)
     A reaction: Compare Idea 14430! Transitivity will take you there, and symmetry will get you back, but that doesn't entitle you to take the shortcut?
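A concrete counterexample makes the error vivid: an element related to nothing at all gains nothing from transitivity and symmetry. A Python sketch (my illustration; the domain and relation are invented):

```python
def symmetric(R):
    """Rxy implies Ryx."""
    return all((y, x) in R for (x, y) in R)

def transitive(R):
    """Rxy and Ryw imply Rxw."""
    return all((x, w) in R
               for (x, y) in R
               for (z, w) in R if y == z)

def reflexive(R, domain):
    """Rxx for every x in the domain."""
    return all((x, x) in R for x in domain)

# On the domain {a, b, c}, relate only a and b to each other (and themselves).
R = {('a', 'b'), ('b', 'a'), ('a', 'a'), ('b', 'b')}
domain = {'a', 'b', 'c'}
print(symmetric(R))          # True
print(transitive(R))         # True
print(reflexive(R, domain))  # False: c is related to nothing, not even itself
```

The 'there and back' argument only yields Rxx for an x that bears R to something; c bears R to nothing, so reflexivity fails.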
9. Objects / F. Identity among Objects / 5. Self-Identity
If non-existent things are self-identical, they are just one thing - so call it the 'null object' [Bostock]
     Full Idea: If even non-existent things are still counted as self-identical, then all non-existent things must be counted as identical with one another, so there is at most one non-existent thing. We might arbitrarily choose zero, or invent 'the null object'.
     From: David Bostock (Intermediate Logic [1997], 8.6)
9. Objects / F. Identity among Objects / 6. Identity between Objects
Identity statements make sense only if there are possible individuating conditions [Benacerraf]
     Full Idea: Identity statements make sense only in contexts where there exist possible individuating conditions.
     From: Paul Benacerraf (What Numbers Could Not Be [1965], III)
     A reaction: He is objecting to bizarre identifications involving numbers. An identity statement may be bizarre even if we can clearly individuate the two candidates. Winston Churchill is a Mars Bar. Identifying George Orwell with Eric Blair doesn't need a 'respect'.
10. Modality / A. Necessity / 6. Logical Necessity
The idea that anything which can be proved is necessary has a problem with empty names [Bostock]
     Full Idea: The common Rule of Necessitation says that what can be proved is necessary, but this is incorrect if we do not permit empty names. The most straightforward answer is to modify elementary logic so that only necessary truths can be proved.
     From: David Bostock (Intermediate Logic [1997], 8.4)
11. Knowledge Aims / B. Certain Knowledge / 3. Fallibilism
Infallibility in science is just a joke [Peirce]
     Full Idea: Infallibility in scientific matters seems to me irresistibly comical.
     From: Charles Sanders Peirce (Concerning the Author [1897], p.3)
12. Knowledge Sources / D. Empiricism / 2. Associationism
Association of ideas is the best philosophical idea of the prescientific age [Peirce]
     Full Idea: The doctrine of the association of ideas is, to my thinking, the finest piece of philosophical work of the prescientific ages.
     From: Charles Sanders Peirce (Concerning the Author [1897], p.2)
14. Science / B. Scientific Theories / 1. Scientific Theory
Duns Scotus offers perhaps the best logic and metaphysics for modern physical science [Peirce]
     Full Idea: The works of Duns Scotus have strongly influenced me. …His logic and metaphysics, torn away from its medievalism, …will go far toward supplying the philosophy which is best to harmonize with physical science.
     From: Charles Sanders Peirce (Concerning the Author [1897], p.2)
19. Language / C. Assigning Meanings / 3. Predicates
A (modern) predicate is the result of leaving a gap for the name in a sentence [Bostock]
     Full Idea: A simple way of approaching the modern notion of a predicate is this: given any sentence which contains a name, the result of dropping that name and leaving a gap in its place is a predicate. Very different from predicates in Aristotle and Kant.
     From: David Bostock (Intermediate Logic [1997], 3.2)
     A reaction: This concept derives from Frege. To get to grips with contemporary philosophy you have to relearn all sorts of basic words like 'predicate' and 'object'.