Combining Texts

All the ideas for 'fragments/reports', 'Foundations without Foundationalism' and 'Intermediate Logic'



123 ideas

3. Truth / F. Semantic Truth / 1. Tarski's Truth / b. Satisfaction and truth
Satisfaction is 'truth in a model', which is a model of 'truth' [Shapiro]
     Full Idea: In a sense, satisfaction is the notion of 'truth in a model', and (as Hodes 1984 elegantly puts it) 'truth in a model' is a model of 'truth'.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.1)
     A reaction: So we can say that Tarski doesn't offer a definition of truth itself, but replaces it with a 'model' of truth.
4. Formal Logic / A. Syllogistic Logic / 1. Aristotelian Logic
Aristotelian logic is complete [Shapiro]
     Full Idea: Aristotelian logic is complete.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 2.5)
     A reaction: [He cites Corcoran 1972]
4. Formal Logic / A. Syllogistic Logic / 2. Syllogistic Logic
Venn Diagrams map three predicates into eight compartments, then look for the conclusion [Bostock]
     Full Idea: Venn Diagrams are a traditional method to test the validity of syllogisms. There are three interlocking circles, one for each predicate, thus dividing the universe into eight basic compartments. Is the conclusion in a compartment?
     From: David Bostock (Intermediate Logic [1997], 3.8)
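As an illustration (not from Bostock's text), the eight compartments can be enumerated programmatically, treating each universal premise as declaring certain compartments empty; a sketch in Python, using the syllogism Barbara as the example:

```python
from itertools import product

# The eight Venn compartments: truth-values for membership in S, M, P.
compartments = list(product([False, True], repeat=3))  # (s, m, p)

def empty_by(premise):
    """Compartments that a universal premise declares empty."""
    return {c for c in compartments if premise(*c)}

# Barbara: All M are P, All S are M |= All S are P.
# 'All M are P' empties the M-and-not-P compartments, and so on.
emptied = empty_by(lambda s, m, p: m and not p) | empty_by(lambda s, m, p: s and not m)
required = empty_by(lambda s, m, p: s and not p)   # what the conclusion needs empty

valid = required <= emptied
print(len(compartments), valid)
```

The syllogism is valid just when every compartment the conclusion requires to be empty is already emptied by the premises.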
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / b. Terminology of PL
'Disjunctive Normal Form' is ensuring that no conjunction has a disjunction within its scope [Bostock]
     Full Idea: 'Disjunctive Normal Form' (DNF) is rearranging the occurrences of ∧ and ∨ so that no conjunction sign has any disjunction in its scope. This is achieved by applying two of the distribution laws.
     From: David Bostock (Intermediate Logic [1997], 2.6)
'Conjunctive Normal Form' is ensuring that no disjunction has a conjunction within its scope [Bostock]
     Full Idea: 'Conjunctive Normal Form' (CNF) is rearranging the occurrences of ∧ and ∨ so that no disjunction sign has any conjunction in its scope. This is achieved by applying two of the distribution laws.
     From: David Bostock (Intermediate Logic [1997], 2.6)
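The idea of DNF can be illustrated by a different route from Bostock's distribution laws: reading one conjunction of literals off each satisfying row of the truth table also yields a formula in Disjunctive Normal Form (a sketch; the function name is illustrative):

```python
from itertools import product

def truth_table_dnf(f, names):
    """Read a DNF off f's truth table: one conjunction of
    literals per satisfying row, disjoined together."""
    rows = []
    for vals in product([True, False], repeat=len(names)):
        if f(*vals):
            lits = [n if v else '¬' + n for n, v in zip(names, vals)]
            rows.append('(' + ' ∧ '.join(lits) + ')')
    return ' ∨ '.join(rows) if rows else '⊥'

# Example: p ∧ (q ∨ r), which the distribution laws would turn into (p∧q) ∨ (p∧r).
dnf = truth_table_dnf(lambda p, q, r: p and (q or r), ['p', 'q', 'r'])
print(dnf)
```

The table method gives a fully expanded DNF (every literal in every conjunct), logically equivalent to, though longer than, the result of applying the distribution laws.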
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / d. Basic theorems of PL
'Disjunction' says that Γ,φ∨ψ|= iff Γ,φ|= and Γ,ψ|= [Bostock]
     Full Idea: The Principle of Disjunction says that Γ,φ∨ψ |= iff Γ,φ |= and Γ,ψ |=.
     From: David Bostock (Intermediate Logic [1997], 2.5.G)
     A reaction: That is, a disjunction leads to a contradiction iff each disjunct separately leads to a contradiction.
The 'conditional' is that Γ|=φ→ψ iff Γ,φ|=ψ [Bostock]
     Full Idea: The Conditional Principle says that Γ |= φ→ψ iff Γ,φ |= ψ. With the addition of negation, this implies φ,φ→ψ |= ψ, which is 'modus ponens'.
     From: David Bostock (Intermediate Logic [1997], 2.5.H)
     A reaction: [Second half is in Ex. 2.5.4]
'Assumptions' says that a formula entails itself (φ|=φ) [Bostock]
     Full Idea: The Principle of Assumptions says that any formula entails itself, i.e. φ |= φ. The principle depends just upon the fact that no interpretation assigns both T and F to the same formula.
     From: David Bostock (Intermediate Logic [1997], 2.5.A)
     A reaction: Thus one can introduce φ |= φ into any proof, and then use it to build more complex sequents needed to attain a particular target formula. Bostock's principle is more general than anything in Lemmon.
'Thinning' allows that if premisses entail a conclusion, then adding further premisses makes no difference [Bostock]
     Full Idea: The Principle of Thinning says that if a set of premisses entails a conclusion, then adding further premisses will still entail the conclusion. It is 'thinning' because it makes a weaker claim. If Γ |= φ then Γ,ψ |= φ.
     From: David Bostock (Intermediate Logic [1997], 2.5.B)
     A reaction: It is also called 'premise-packing'. It is the characteristic of a 'monotonic' logic - where once something is proved, it stays proved, whatever else is introduced.
'Cutting' allows that if x is proved, and adding y then proves z, you can go straight to z [Bostock]
     Full Idea: The Principle of Cutting is the general point that entailment is transitive, extending this to cover entailments with more than one premiss. Thus if Γ |= φ and φ,Δ |= ψ then Γ,Δ |= ψ. Here φ has been 'cut out'.
     From: David Bostock (Intermediate Logic [1997], 2.5.C)
     A reaction: It might be called the Principle of Shortcutting, since you can get straight to the last conclusion, eliminating the intermediate step.
'Negation' says that Γ,¬φ|= iff Γ|=φ [Bostock]
     Full Idea: The Principle of Negation says that Γ,¬φ |= iff Γ |= φ. We also say that φ,¬φ |=, and hence by 'thinning on the right' that φ,¬φ |= ψ, which is 'ex falso quodlibet'.
     From: David Bostock (Intermediate Logic [1997], 2.5.E)
     A reaction: That is, roughly, if the formula gives consistency, the negation gives contradiction. 'Ex falso' says that anything will follow from a contradiction.
'Conjunction' says that Γ|=φ∧ψ iff Γ|=φ and Γ|=ψ [Bostock]
     Full Idea: The Principle of Conjunction says that Γ |= φ∧ψ iff Γ |= φ and Γ |= ψ. This implies φ,ψ |= φ∧ψ, which is ∧-introduction. It also implies ∧-elimination.
     From: David Bostock (Intermediate Logic [1997], 2.5.F)
     A reaction: [Second half is Ex. 2.5.3] That is, if they are entailed separately, they are entailed as a unit. It is a moot point whether these principles are theorems of propositional logic, or derivation rules.
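These semantic principles can be checked by brute force over truth-value assignments; a hypothetical sketch (the function and its name are illustrative, not Bostock's):

```python
from itertools import product

def entails(premises, conclusion, n):
    """Γ |= φ over n atoms: every assignment satisfying all of Γ
    satisfies φ. With conclusion=None, checks Γ |= (inconsistency)."""
    for vals in product([True, False], repeat=n):
        if all(p(*vals) for p in premises):
            if conclusion is None or not conclusion(*vals):
                return False
    return True

p = lambda a, b: a
q = lambda a, b: b
conj = lambda a, b: a and b

# Conjunction: φ,ψ |= φ∧ψ (∧-introduction)
print(entails([p, q], conj, 2))
# Negation: φ,¬φ |= — an inconsistent set, hence ex falso quodlibet
print(entails([p, lambda a, b: not a], None, 2))
```

Such a checker is only feasible because propositional logic is decidable by truth tables; it is a verification device, not a proof system.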
4. Formal Logic / B. Propositional Logic PL / 2. Tools of Propositional Logic / e. Axioms of PL
A logic with ¬ and → needs three axiom-schemas and one rule as foundation [Bostock]
     Full Idea: For ¬,→ Schemas: (A1) |- φ→(ψ→φ), (A2) |- (φ→(ψ→ξ)) → ((φ→ψ)→(φ→ξ)), (A3) |- (¬φ→¬ψ) → (ψ→φ). Rule (DET): from |- φ and |- φ→ψ infer |- ψ.
     From: David Bostock (Intermediate Logic [1997], 5.2)
     A reaction: A1 says everything implies a truth, A2 is conditional proof, and A3 is contraposition. DET is modus ponens. This is Bostock's compact near-minimal axiom system for propositional logic. He adds two axioms and another rule for predicate logic.
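The schemas and rule can be exercised mechanically. The following sketch (not from Bostock) encodes formulas as nested tuples and runs the classic five-step derivation of |- φ→φ from A1, A2 and DET:

```python
# Formulas as nested tuples: ('->', A, B); atoms are strings.
def imp(a, b): return ('->', a, b)

def A1(phi, psi):            # |- φ→(ψ→φ)
    return imp(phi, imp(psi, phi))

def A2(phi, psi, xi):        # |- (φ→(ψ→ξ)) → ((φ→ψ)→(φ→ξ))
    return imp(imp(phi, imp(psi, xi)), imp(imp(phi, psi), imp(phi, xi)))

def DET(major, minor):       # from |- φ→ψ and |- φ, conclude |- ψ
    assert major[0] == '->' and major[1] == minor, "modus ponens mismatch"
    return major[2]

# The classic five-step derivation of |- φ→φ:
phi = 'p'
s1 = A2(phi, imp(phi, phi), phi)   # A2 with ψ := φ→φ, ξ := φ
s2 = A1(phi, imp(phi, phi))        # A1 with ψ := φ→φ
s3 = DET(s1, s2)
s4 = A1(phi, phi)
s5 = DET(s3, s4)
print(s5)  # ('->', 'p', 'p'), i.e. |- p→p
```

The assertion inside DET plays the role of a proof-checker: each application must literally match the major premise's antecedent, just as in a formal derivation.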
4. Formal Logic / E. Nonclassical Logics / 6. Free Logic
A 'free' logic can have empty names, and a 'universally free' logic can have empty domains [Bostock]
     Full Idea: A 'free' logic is one in which names are permitted to be empty. A 'universally free' logic is one in which the domain of an interpretation may also be empty.
     From: David Bostock (Intermediate Logic [1997], 8.6)
4. Formal Logic / F. Set Theory ST / 3. Types of Set / a. Types of set
A set is 'transitive' if it contains every member of each of its members [Shapiro]
     Full Idea: If, for every b∈d, a∈b entails that a∈d, then d is said to be 'transitive'. In other words, d is transitive if it contains every member of each of its members.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 4.2)
     A reaction: The alternative would be that the members of the set are subsets, but the members of those subsets are not themselves members of the higher-level set.
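Shapiro's definition is easy to test on finite examples; a sketch (the von Neumann ordinals used here are a standard illustration, not taken from the text):

```python
def is_transitive(d):
    """d is transitive iff every member of a member of d is a member of d."""
    return all(a in d for b in d for a in b)

# Von Neumann ordinals 0, 1, 2, 3 as nested frozensets; each is transitive.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
three = frozenset({zero, one, two})

print(is_transitive(three))                 # transitive
print(is_transitive(frozenset({two})))      # contains 2 but not 2's members
```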
4. Formal Logic / F. Set Theory ST / 4. Axioms for Sets / j. Axiom of Choice IX
Choice is essential for proving downward Löwenheim-Skolem [Shapiro]
     Full Idea: The axiom of choice is essential for proving the downward Löwenheim-Skolem Theorem.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 4.1)
4. Formal Logic / F. Set Theory ST / 5. Conceptions of Set / a. Sets as existing
Are sets part of logic, or part of mathematics? [Shapiro]
     Full Idea: Is there a notion of set in the jurisdiction of logic, or does it belong to mathematics proper?
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: It immediately strikes me that they might be neither. I don't see that relations between well-defined groups of things must involve number, and I don't see that mapping the relations must intrinsically involve logical consequence or inference.
4. Formal Logic / F. Set Theory ST / 5. Conceptions of Set / e. Iterative sets
Russell's paradox shows that there are classes which are not iterative sets [Shapiro]
     Full Idea: The argument behind Russell's paradox shows that in set theory there are logical sets (i.e. classes) that are not iterative sets.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.3)
     A reaction: In his preface, Shapiro expresses doubts about the idea of a 'logical set'. Hence the theorists like the iterative hierarchy because it is well-founded and under control, not because it is comprehensive in scope. See all of pp.19-20.
It is central to the iterative conception that membership is well-founded, with no infinite descending chains [Shapiro]
     Full Idea: In set theory it is central to the iterative conception that the membership relation is well-founded, ...which means there are no infinite descending chains under the membership relation.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 5.1.4)
Iterative sets are not Boolean; the complement of an iterative set is not an iterative set [Shapiro]
     Full Idea: Iterative sets do not exhibit a Boolean structure, because the complement of an iterative set is not itself an iterative set.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.1)
4. Formal Logic / F. Set Theory ST / 6. Ordering in Sets
'Well-ordering' of a set is an irreflexive, transitive binary relation under which every non-empty subset has a least element [Shapiro]
     Full Idea: A 'well-ordering' of a set X is an irreflexive, transitive binary relation on X in which every non-empty subset of X has a least element.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 5.1.3)
     A reaction: So there is a beginning, an ongoing sequence, and no retracing of steps.
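On a finite set the definition can be checked exhaustively; a sketch (the function name is illustrative, and the subset check is only feasible for small X):

```python
from itertools import combinations, product

def is_well_ordering(X, less):
    """Finite check: irreflexive, transitive, and every non-empty
    subset of X has a least element under 'less'."""
    if any(less(x, x) for x in X):
        return False
    if any(less(x, y) and less(y, z) and not less(x, z)
           for x, y, z in product(X, repeat=3)):
        return False
    for r in range(1, len(X) + 1):
        for sub in combinations(X, r):
            # a least element is below every other member of the subset
            if not any(all(less(m, o) for o in sub if o != m) for m in sub):
                return False
    return True

print(is_well_ordering({0, 1, 2, 3}, lambda a, b: a < b))   # the usual order
print(is_well_ordering({0, 1, 2, 3}, lambda a, b: a != b))  # not transitive
```

Note that the least-element condition on two-element subsets forces the relation to be connected, so linearity need not be stated separately.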
5. Theory of Logic / A. Overview of Logic / 1. Overview of Logic
There is no 'correct' logic for natural languages [Shapiro]
     Full Idea: There is no question of finding the 'correct' or 'true' logic underlying a part of natural language.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: One needs the context of Shapiro's defence of second-order logic to see his reasons for this. Call me romantic, but I retain faith that there is one true logic. The Kennedy Assassination problem - can't see the truth because drowning in evidence.
Logic is the ideal for learning new propositions on the basis of others [Shapiro]
     Full Idea: A logic can be seen as the ideal of what may be called 'relative justification', the process of coming to know some propositions on the basis of others.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 2.3.1)
     A reaction: This seems to be the modern idea of logic, as opposed to identification of a set of 'logical truths' from which eternal necessities (such as mathematics) can be derived. 'Know' implies that they are true - which conclusions may not be.
5. Theory of Logic / A. Overview of Logic / 2. History of Logic
Skolem and Gödel championed first-order, and Zermelo, Hilbert, and Bernays championed higher-order [Shapiro]
     Full Idea: Skolem and Gödel were the main proponents of first-order languages. The higher-order language 'opposition' was championed by Zermelo, Hilbert, and Bernays.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.2)
Bernays (1918) formulated and proved the completeness of propositional logic [Shapiro]
     Full Idea: Bernays (1918) formulated and proved the completeness of propositional logic, the first precise solution as part of the Hilbert programme.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.2.1)
Can one develop set theory first, then derive numbers, or are numbers more basic? [Shapiro]
     Full Idea: In 1910 Weyl observed that set theory seemed to presuppose natural numbers, and he regarded numbers as more fundamental than sets, as did Fraenkel. Dedekind had developed set theory independently, and used it to formulate numbers.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.2.2)
5. Theory of Logic / A. Overview of Logic / 5. First-Order Logic
The 'triumph' of first-order logic may be related to logicism and the Hilbert programme, which failed [Shapiro]
     Full Idea: The 'triumph' of first-order logic may be related to the remnants of failed foundationalist programmes early this century - logicism and the Hilbert programme.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: Being complete must also be one of its attractions, and Quine seems to like it because of its minimal ontological commitment.
Maybe compactness, semantic effectiveness, and the Löwenheim-Skolem properties are desirable [Shapiro]
     Full Idea: Tharp (1975) suggested that compactness, semantic effectiveness, and the Löwenheim-Skolem properties are consequences of features one would want a logic to have.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 6.5)
     A reaction: I like this proposal, though Shapiro is strongly against. We keep extending our logic so that we can prove new things, but why should we assume that we can prove everything? That's just what Gödel suggests that we should give up on.
First-order logic was an afterthought in the development of modern logic [Shapiro]
     Full Idea: Almost all the systems developed in the first part of the twentieth century are higher-order; first-order logic was an afterthought.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.1)
The notion of finitude is actually built into first-order languages [Shapiro]
     Full Idea: The notion of finitude is explicitly 'built in' to the systems of first-order languages in one way or another.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 9.1)
     A reaction: Personally I am inclined to think that they are none the worse for that. No one had even thought of all these lovely infinities before 1870, and now we are supposed to change our logic (our actual logic!) to accommodate them. Cf quantum logic.
5. Theory of Logic / A. Overview of Logic / 6. Classical Logic
Truth is the basic notion in classical logic [Bostock]
     Full Idea: The most fundamental notion in classical logic is that of truth.
     From: David Bostock (Intermediate Logic [1997], 1.1)
     A reaction: The opening sentence of his book. Hence the first half of the book is about semantics, and only the second half deals with proof. Compare Idea 10282. The thought seems to be that you could leave out truth, but that makes logic pointless.
Elementary logic cannot distinguish clearly between the finite and the infinite [Bostock]
     Full Idea: In very general terms, we cannot express the distinction between what is finite and what is infinite without moving essentially beyond the resources available in elementary logic.
     From: David Bostock (Intermediate Logic [1997], 4.8)
     A reaction: This observation concludes a discussion of Compactness in logic.
Fictional characters wreck elementary logic, as they have contradictions and no excluded middle [Bostock]
     Full Idea: Discourse about fictional characters leads to a breakdown of elementary logic. We accept P or ¬P if the relevant story says so, but P∨¬P will not be true if the relevant story says nothing either way, and P∧¬P is true if the story is inconsistent.
     From: David Bostock (Intermediate Logic [1997], 8.5)
     A reaction: I really like this. Does one need to invent a completely new logic for fictional characters? Or must their logic be intuitionist, or paraconsistent, or both?
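One hedged way to model 'the story says nothing either way' is a three-valued (strong Kleene) evaluation, on which excluded middle is no longer guaranteed; this is an illustration of the breakdown, not Bostock's own proposal:

```python
# Strong Kleene three-valued logic: True, False, or None
# (None = 'the relevant story says nothing either way').
def k_not(p):
    return None if p is None else not p

def k_or(p, q):
    if p is True or q is True:
        return True
    if p is False and q is False:
        return False
    return None

story_silent = None
print(k_or(story_silent, k_not(story_silent)))  # P∨¬P undetermined
print(k_or(True, k_not(True)))                  # true when the story decides P
```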
5. Theory of Logic / A. Overview of Logic / 7. Second-Order Logic
Second-order logic is better than set theory, since it only adds relations and operations, and nothing else [Shapiro, by Lavine]
     Full Idea: Shapiro preferred second-order logic to set theory because second-order logic refers only to the relations and operations in a domain, and not to the other things that set-theory brings with it - other domains, higher-order relations, and so forth.
     From: report of Stewart Shapiro (Foundations without Foundationalism [1991]) by Shaughan Lavine - Understanding the Infinite VII.4
Broad standard semantics, or Henkin semantics with a subclass, or many-sorted first-order semantics? [Shapiro]
     Full Idea: Three systems of semantics for second-order languages: 'standard semantics' (variables cover all relations and functions), 'Henkin semantics' (relations and functions are a subclass) and 'first-order semantics' (many-sorted domains for variable-types).
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: [my summary]
In standard semantics for second-order logic, a single domain fixes the ranges for the variables [Shapiro]
     Full Idea: In the standard semantics of second-order logic, by fixing a domain one thereby fixes the range of both the first-order variables and the second-order variables. There is no further 'interpreting' to be done.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 3.3)
     A reaction: This contrasts with 'Henkin' semantics (Idea 13650), or first-order semantics, which involve more than one domain of quantification.
Henkin semantics has separate variables ranging over the relations and over the functions [Shapiro]
     Full Idea: In 'Henkin' semantics, in a given model the relation variables range over a fixed collection of relations D on the domain, and the function variables range over a collection of functions F on the domain.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 3.3)
Completeness, Compactness and Löwenheim-Skolem fail in second-order standard semantics [Shapiro]
     Full Idea: The counterparts of Completeness, Compactness and the Löwenheim-Skolem theorems all fail for second-order languages with standard semantics, but hold for Henkin or first-order semantics. Hence such logics are much like first-order logic.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 4.1)
     A reaction: Shapiro votes for the standard semantics, because he wants the greater expressive power, especially for the characterization of infinite structures.
5. Theory of Logic / B. Logical Consequence / 3. Deductive Consequence |-
The syntactic turnstile |- φ means 'there is a proof of φ' or 'φ is a theorem' [Bostock]
     Full Idea: The syntactic turnstile |- φ means 'There is a proof of φ' (in the system currently being considered). Another way of saying the same thing is 'φ is a theorem'.
     From: David Bostock (Intermediate Logic [1997], 5.1)
5. Theory of Logic / B. Logical Consequence / 4. Semantic Consequence |=
Γ|=φ is 'entails'; Γ|= is 'is inconsistent'; |=φ is 'valid' [Bostock]
     Full Idea: If we write Γ |= φ, with one formula to the right, then the turnstile abbreviates 'entails'. For a sequent of the form Γ |= it can be read as 'is inconsistent'. For |= φ we read it as 'valid'.
     From: David Bostock (Intermediate Logic [1997], 1.3)
Validity is a conclusion following for premises, even if there is no proof [Bostock]
     Full Idea: The classical definition of validity counts an argument as valid if and only if the conclusion does in fact follow from the premises, whether or not the argument contains any demonstration of this fact.
     From: David Bostock (Intermediate Logic [1997], 1.2)
     A reaction: Hence validity is given by |= rather than by |-. A common example is 'it is red so it is coloured', which seems true but beyond proof. In the absence of formal proof, you wonder whether validity is merely a psychological notion.
It seems more natural to express |= as 'therefore', rather than 'entails' [Bostock]
     Full Idea: In practice we avoid quotation marks and explicitly set-theoretic notation that explaining |= as 'entails' appears to demand. Hence it seems more natural to explain |= as simply representing the word 'therefore'.
     From: David Bostock (Intermediate Logic [1997], 1.3)
     A reaction: Not sure I quite understand that, but I have trained myself to say 'therefore' for the generic use of |=. In other consequences it seems better to read it as 'semantic consequence', to distinguish it from |-.
If a logic is incomplete, its semantic consequence relation is not effective [Shapiro]
     Full Idea: Second-order logic is inherently incomplete, so its semantic consequence relation is not effective.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.2.1)
Semantic consequence is ineffective in second-order logic [Shapiro]
     Full Idea: It follows from Gödel's incompleteness theorem that the semantic consequence relation of second-order logic is not effective. For example, the set of logical truths of any second-order logic is not recursively enumerable. It is not even arithmetic.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: I don't fully understand this, but it sounds rather major, and a good reason to avoid second-order logic (despite Shapiro's proselytising). See Peter Smith on 'effectively enumerable'.
5. Theory of Logic / B. Logical Consequence / 5. Modus Ponens
MPP is a converse of Deduction: If Γ |- φ→ψ then Γ,φ|-ψ [Bostock]
     Full Idea: Modus Ponens is equivalent to the converse of the Deduction Theorem, namely 'If Γ |- φ→ψ then Γ,φ|-ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: See 13615 for details of the Deduction Theorem. See 13614 for Modus Ponens.
MPP: 'If Γ|=φ and Γ|=φ→ψ then Γ|=ψ' (omit Γs for Detachment) [Bostock]
     Full Idea: The Rule of Detachment is a version of Modus Ponens, and says 'If |=φ and |=φ→ψ then |=ψ'. This has no assumptions. Modus Ponens is the more general rule that 'If Γ|=φ and Γ|=φ→ψ then Γ|=ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: Modus Ponens is actually designed for use in proof based on assumptions (which isn't always the case). In Detachment the formulae are just valid, without dependence on assumptions to support them.
5. Theory of Logic / D. Assumptions for Logic / 4. Identity in Logic
The sign '=' is a two-place predicate expressing that 'a is the same thing as b' (a=b) [Bostock]
     Full Idea: We shall use 'a=b' as short for 'a is the same thing as b'. The sign '=' thus expresses a particular two-place predicate. Officially we will use 'I' as the identity predicate, so that 'Iab' is a formula, but we normally 'abbreviate' this to 'a=b'.
     From: David Bostock (Intermediate Logic [1997], 8.1)
|= α=α and α=β |= φ(α/ξ) ↔ φ(β/ξ) fix identity [Bostock]
     Full Idea: We usually take these two principles together as the basic principles of identity: |= α=α and α=β |= φ(α/ξ) ↔ φ(β/ξ). The second (with scant regard for history) is known as Leibniz's Law.
     From: David Bostock (Intermediate Logic [1997], 8.1)
If we are to express that there are at least two things, we need identity [Bostock]
     Full Idea: To say that there is at least one thing x such that Fx we need only use an existential quantifier, but to say that there are at least two things we need identity as well.
     From: David Bostock (Intermediate Logic [1997], 8.1)
     A reaction: The only clear account I've found of why logic may need to be 'with identity'. Without it, you can only reason about one thing or all things. Presumably plural quantification no longer requires '='?
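The point can be made concrete by evaluating ∃x∃y(x≠y) over small domains; a sketch (not from the text):

```python
def at_least_two(domain):
    """∃x∃y(x≠y): expressible only because '≠' (identity) is available
    alongside the quantifiers."""
    return any(x != y for x in domain for y in domain)

print(at_least_two({'a'}))        # a one-element domain
print(at_least_two({'a', 'b'}))   # a two-element domain
```

Without identity, any formula true in a one-element domain under some interpretation remains true in larger ones, so 'there are at least two things' is inexpressible.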
5. Theory of Logic / E. Structures of Logic / 1. Logical Form
Finding the logical form of a sentence is difficult, and there are no criteria of correctness [Shapiro]
     Full Idea: It is sometimes difficult to find a formula that is a suitable counterpart of a particular sentence of natural language, and there is no acclaimed criterion for what counts as a good, or even acceptable, 'translation'.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.1)
5. Theory of Logic / E. Structures of Logic / 2. Logical Connectives / a. Logical connectives
Truth-functors are usually held to be defined by their truth-tables [Bostock]
     Full Idea: The usual view of the meaning of truth-functors is that each is defined by its own truth-table, independently of any other truth-functor.
     From: David Bostock (Intermediate Logic [1997], 2.7)
5. Theory of Logic / E. Structures of Logic / 5. Functions in Logic
A 'total' function ranges over the whole domain, a 'partial' function over appropriate inputs [Bostock]
     Full Idea: Usually we allow that a function is defined for arguments of a suitable kind (a 'partial' function), but we can say that each function has one value for any object whatever, from the whole domain that our quantifiers range over (a 'total' function).
     From: David Bostock (Intermediate Logic [1997], 8.2)
     A reaction: He points out (p.338) that 'the father of..' is a functional expression, but it wouldn't normally take stones as input, so seems to be a partial function. But then it doesn't even take all male humans either. It only takes fathers!
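Bostock's 'father of' example can be sketched as a partial function over a lookup table, alongside a total extension that returns a default value outside the natural domain (the data here is hypothetical):

```python
# A partial function: defined only for arguments of a suitable kind.
fathers = {'Isaac': 'Abraham', 'Jacob': 'Isaac'}   # hypothetical data

def father_partial(x):
    return fathers[x]               # raises KeyError outside the domain

def father_total(x, default=None):
    return fathers.get(x, default)  # total: some value for every input

print(father_partial('Isaac'))      # defined
print(father_total('a stone'))      # totalised with a default value
```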
A 'zero-place' function just has a single value, so it is a name [Bostock]
     Full Idea: We can talk of a 'zero-place' function, which is a new-fangled name for a familiar item; it just has a single value, and so it has the same role as a name.
     From: David Bostock (Intermediate Logic [1997], 8.2)
5. Theory of Logic / F. Referring in Logic / 1. Naming / a. Names
In logic, a name is just any expression which refers to a particular single object [Bostock]
     Full Idea: The important thing about a name, for logical purposes, is that it is used to make a singular reference to a particular object; ..we say that any expression too may be counted as a name, for our purposes, if it too performs the same job.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: He cites definite descriptions as the most notoriously difficult case, in deciding whether or not they function as names. I take it as pretty obvious that sometimes they do and sometimes they don't (in ordinary usage).
5. Theory of Logic / F. Referring in Logic / 1. Naming / e. Empty names
An expression is only a name if it succeeds in referring to a real object [Bostock]
     Full Idea: An expression is not counted as a name unless it succeeds in referring to an object, i.e. unless there really is an object to which it refers.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: His 'i.e.' makes the existence condition sound sufficient, but in ordinary language you don't succeed in referring to 'that man over there' just because he exists. In modal contexts we presumably refer to hypothetical objects (pace Lewis).
5. Theory of Logic / F. Referring in Logic / 2. Descriptions / b. Definite descriptions
Definite descriptions resemble names, but can't actually be names, if they don't always refer [Bostock]
     Full Idea: Although a definite description looks like a complex name, and in many ways behaves like a name, still it cannot be a name if names must always refer to objects. Russell gave the first proposal for handling such expressions.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: I take the simple solution to be a pragmatic one, as roughly shown by Donnellan, that sometimes they are used exactly like names, and sometimes as something else. The same phrase can have both roles. Confusing for logicians. Tough.
Because of scope problems, definite descriptions are best treated as quantifiers [Bostock]
     Full Idea: Because of the scope problem, it now seems better to 'parse' definite descriptions not as names but as quantifiers. 'The' is to be treated in the same category as acknowledged quantifiers like 'all' and 'some'. We write Ix - 'for the x such that..'.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: This seems intuitively rather good, since quantification in normal speech is much more sophisticated than the crude quantification of classical logic. But the fact is that they often function as names (but see Idea 13817).
Definite descriptions are usually treated like names, and are just like them if they uniquely refer [Bostock]
     Full Idea: In practice, definite descriptions are for the most part treated as names, since this is by far the most convenient notation (even though they have scope). ..When a description is uniquely satisfied then it does behave like a name.
     From: David Bostock (Intermediate Logic [1997], 8.3)
     A reaction: Apparent names themselves have problems when they wander away from uniquely picking out one thing, as in 'John Doe'.
We are only obliged to treat definite descriptions as non-names if only the former have scope [Bostock]
     Full Idea: If it is really true that definite descriptions have scopes whereas names do not, then Russell must be right to claim that definite descriptions are not names. If, however, this is not true, then it does no harm to treat descriptions as complex names.
     From: David Bostock (Intermediate Logic [1997], 8.8)
Definite descriptions don't always pick out one thing, as in denials of existence, or errors [Bostock]
     Full Idea: It is natural to suppose one only uses a definite description when one believes it describes only one thing, but exceptions are 'there is no such thing as the greatest prime number', or saying something false where the reference doesn't occur.
     From: David Bostock (Intermediate Logic [1997], 8.3)
5. Theory of Logic / F. Referring in Logic / 2. Descriptions / c. Theory of definite descriptions
Names do not have scope problems (e.g. in placing negation), but Russell's account does have that problem [Bostock]
     Full Idea: In orthodox logic names are not regarded as having scope (for example, over where a negation is placed), whereas on Russell's theory definite descriptions certainly do. Russell had his own way of dealing with this.
     From: David Bostock (Intermediate Logic [1997], 8.3)
5. Theory of Logic / G. Quantification / 1. Quantification
'Prenex normal form' is all quantifiers at the beginning, out of the scope of truth-functors [Bostock]
     Full Idea: A formula is said to be in 'prenex normal form' (PNF) iff all its quantifiers occur in a block at the beginning, so that no quantifier is in the scope of any truth-functor.
     From: David Bostock (Intermediate Logic [1997], 3.7)
     A reaction: Bostock provides six equivalences which can be applied to manoeuvre any formula into prenex normal form. He proves that every formula can be arranged in PNF.
5. Theory of Logic / G. Quantification / 2. Domain of Quantification
If we allow empty domains, we must allow empty names [Bostock]
     Full Idea: We can show that if empty domains are permitted, then empty names must be permitted too.
     From: David Bostock (Intermediate Logic [1997], 8.4)
5. Theory of Logic / G. Quantification / 4. Substitutional Quantification
We might reduce ontology by using truth of sentences and terms, instead of using objects satisfying models [Shapiro]
     Full Idea: The main role of substitutional semantics is to reduce ontology. As an alternative to model-theoretic semantics for formal languages, the idea is to replace the 'satisfaction' relation of formulas (by objects) with the 'truth' of sentences (using terms).
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 9.1.4)
     A reaction: I find this very appealing, and Ruth Barcan Marcus is the person to look at. My intuition is that logic should have no ontology at all, as it is just about how inference works, not about how things are. Shapiro offers a compromise.
5. Theory of Logic / H. Proof Systems / 1. Proof Systems
An 'informal proof' is in no particular system, and uses obvious steps and some ordinary English [Bostock]
     Full Idea: An 'informal proof' is not in any particular proof system. One may use any rule of proof that is 'sufficiently obvious', and there is quite a lot of ordinary English in the proof, explaining what is going on at each step.
     From: David Bostock (Intermediate Logic [1997], 8.1)
5. Theory of Logic / H. Proof Systems / 2. Axiomatic Proof
Quantification adds two axiom-schemas and a new rule [Bostock]
     Full Idea: New axiom-schemas for quantifiers: (A4) |-∀ξφ → φ(α/ξ), (A5) |-∀ξ(ψ→φ) → (ψ→∀ξφ), plus the rule GEN: If |-φ then |-∀ξφ(ξ/α).
     From: David Bostock (Intermediate Logic [1997], 5.6)
     A reaction: This follows on from Idea 13610, where he laid out his three axioms and one rule for propositional (truth-functional) logic. This Idea plus 13610 make Bostock's proposed axiomatisation of first-order logic.
Axiom systems from Frege, Russell, Church, Lukasiewicz, Tarski, Nicod, Kleene, Quine... [Bostock]
     Full Idea: Notable axiomatisations of first-order logic are those of Frege (1879), Russell and Whitehead (1910), Church (1956), Lukasiewicz and Tarski (1930), Lukasiewicz (1936), Nicod (1917), Kleene (1952) and Quine (1951). Also Bostock (1997).
     From: David Bostock (Intermediate Logic [1997], 5.8)
     A reaction: My summary, from Bostock's appendix 5.8, which gives details of all of these nine systems. This nicely illustrates the status and nature of axiom systems, which have lost the absolute status they seemed to have in Euclid.
5. Theory of Logic / H. Proof Systems / 3. Proof from Assumptions
'Conditionalised' inferences point to the Deduction Theorem: If Γ,φ|-ψ then Γ|-φ→ψ [Bostock]
     Full Idea: If a group of formulae prove a conclusion, we can 'conditionalize' this into a chain of separate inferences, which leads to the Deduction Theorem (or Conditional Proof), that 'If Γ,φ|-ψ then Γ|-φ→ψ'.
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: This is the rule CP (Conditional Proof) which can be found in the rules for propositional logic I transcribed from Lemmon's book.
Proof by Assumptions can always be reduced to Proof by Axioms, using the Deduction Theorem [Bostock]
     Full Idea: By repeated transformations using the Deduction Theorem, any proof from assumptions can be transformed into a fully conditionalized proof, which is then an axiomatic proof.
     From: David Bostock (Intermediate Logic [1997], 5.6)
     A reaction: Since proof using assumptions is perhaps the most standard proof system (e.g. used in Lemmon, for many years the standard book at Oxford University), the Deduction Theorem is crucial for giving it solid foundations.
The Deduction Theorem and Reductio can 'discharge' assumptions - they aren't needed for the new truth [Bostock]
     Full Idea: Like the Deduction Theorem, one form of Reductio ad Absurdum (If Γ,φ|-[absurdity] then Γ|-¬φ) 'discharges' an assumption. Assume φ and obtain a contradiction, then we know ¬φ, without assuming φ.
     From: David Bostock (Intermediate Logic [1997], 5.7)
     A reaction: Thus proofs from assumption either arrive at conditional truths, or at truths that are true irrespective of what was initially assumed.
The Deduction Theorem greatly simplifies the search for proof [Bostock]
     Full Idea: Use of the Deduction Theorem greatly simplifies the search for proof (or more strictly, the task of showing that there is a proof).
     From: David Bostock (Intermediate Logic [1997], 5.3)
     A reaction: See 13615 for details of the Deduction Theorem. Bostock is referring to axiomatic proof, where it can be quite hard to decide which axioms are relevant. The Deduction Theorem enables the making of assumptions.
5. Theory of Logic / H. Proof Systems / 4. Natural Deduction
Natural deduction takes proof from assumptions (with its rules) as basic, and axioms play no part [Bostock]
     Full Idea: Natural deduction takes the notion of proof from assumptions as a basic notion, ...so it will use rules for use in proofs from assumptions, and axioms (as traditionally understood) will have no role to play.
     From: David Bostock (Intermediate Logic [1997], 6.1)
     A reaction: The main rules are those for introduction and elimination of truth functors.
Excluded middle is an introduction rule for negation, and ex falso quodlibet will eliminate it [Bostock]
     Full Idea: Many books take RAA (reductio) and DNE (double neg) as the natural deduction introduction- and elimination-rules for negation, but RAA is not a natural introduction rule. I prefer TND (tertium) and EFQ (ex falso) for ¬-introduction and -elimination.
     From: David Bostock (Intermediate Logic [1997], 6.2)
In natural deduction we work from the premisses and the conclusion, hoping to meet in the middle [Bostock]
     Full Idea: When looking for a proof of a sequent, the best we can do in natural deduction is to work simultaneously in both directions, forward from the premisses, and back from the conclusion, and hope they will meet in the middle.
     From: David Bostock (Intermediate Logic [1997], 6.5)
Natural deduction rules for → are the Deduction Theorem (→I) and Modus Ponens (→E) [Bostock]
     Full Idea: Natural deduction adopts for → as rules the Deduction Theorem and Modus Ponens, here called →I and →E. If ψ follows φ in the proof, we can write φ→ψ (→I). φ and φ→ψ permit ψ (→E).
     From: David Bostock (Intermediate Logic [1997], 6.2)
     A reaction: Natural deduction has this neat and appealing way of formally introducing or eliminating each connective, so that you know where you are, and you know what each one means.
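In the usual natural-deduction layout (a standard presentation, not a quotation from Bostock), the two rules look like this, with square brackets marking the assumption that →I discharges:

```latex
\frac{\begin{array}{c}[\varphi]\\ \vdots\\ \psi\end{array}}{\varphi\to\psi}\;(\to I)
\qquad\qquad
\frac{\varphi \qquad \varphi\to\psi}{\psi}\;(\to E)
```

So →I packages a subproof from φ to ψ into the single formula φ→ψ, and →E (Modus Ponens) unpacks it again.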
5. Theory of Logic / H. Proof Systems / 5. Tableau Proof
A tree proof becomes too broad if its only rule is Modus Ponens [Bostock]
     Full Idea: When the only rule of inference is Modus Ponens, the branches of a tree proof soon spread too wide for comfort.
     From: David Bostock (Intermediate Logic [1997], 6.4)
Non-branching rules add lines, and branching rules need a split; a branch with a contradiction is 'closed' [Bostock]
     Full Idea: Rules for semantic tableaus are of two kinds - non-branching rules and branching rules. The first allow the addition of further lines, and the second requires splitting the branch. A branch which assigns contradictory values to a formula is 'closed'.
     From: David Bostock (Intermediate Logic [1997], 4.1)
     A reaction: [compressed] Thus 'and' stays on one branch, asserting both formulae, but 'or' splits, checking first one and then the other. A proof succeeds when all the branches are closed, showing that the initial assumption leads only to contradictions.
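The branching/non-branching division can be made executable. The sketch below is a minimal propositional tableau satisfiability test; the tuple encoding of formulas (('var','p'), ('not',f), ('and',f,g), ('or',f,g)) is my own illustration, not Bostock's notation.

```python
# Minimal propositional tableau (hypothetical tuple encoding, not Bostock's).
def satisfiable(branch):
    """True iff some completed branch stays open (no contradiction)."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == 'and':                 # non-branching: add both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == 'or':                  # branching: the branch splits in two
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == 'not':
            g = f[1]
            if g[0] == 'not':             # double negation: non-branching
                return satisfiable(rest + [g[1]])
            if g[0] == 'and':             # neg of a conjunction: branching
                return (satisfiable(rest + [('not', g[1])])
                        or satisfiable(rest + [('not', g[2])]))
            if g[0] == 'or':              # neg of a disjunction: non-branching
                return satisfiable(rest + [('not', g[1]), ('not', g[2])])
    # only literals remain; the branch is closed iff it has both p and not-p
    pos = {f[1] for f in branch if f[0] == 'var'}
    neg = {f[1][1] for f in branch if f[0] == 'not'}
    return not (pos & neg)

# p ∧ ¬p closes every branch; p ∨ q leaves an open branch
assert not satisfiable([('and', ('var', 'p'), ('not', ('var', 'p')))])
assert satisfiable([('or', ('var', 'p'), ('var', 'q'))])
```

A proof then succeeds exactly when every branch closes: to test a sequent Γ |- φ, check that Γ together with ¬φ is unsatisfiable.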
In a tableau proof no sequent is established until the final branch is closed; hypotheses are explored [Bostock]
     Full Idea: In a tableau system no sequent is established until the final step of the proof, when the last branch closes, and until then we are simply exploring a hypothesis.
     From: David Bostock (Intermediate Logic [1997], 7.3)
     A reaction: This compares sharply with a sequent calculus, where every single step is a conclusive proof of something. So use tableaux for exploring proofs, and then sequent calculi for writing them up?
Tableau proofs use reduction - seeking an impossible consequence from an assumption [Bostock]
     Full Idea: A tableau proof is a proof by reductio ad absurdum. One begins with an assumption, and one develops the consequences of that assumption, seeking to derive an impossible consequence.
     From: David Bostock (Intermediate Logic [1997], 4.1)
A completed open branch gives an interpretation which verifies those formulae [Bostock]
     Full Idea: An open branch in a completed tableau will always yield an interpretation that verifies every formula on the branch.
     From: David Bostock (Intermediate Logic [1997], 4.7)
     A reaction: In other words the open branch shows a model which seems to work (on the available information). Similarly a closed branch gives a model which won't work - a counterexample.
Unlike natural deduction, semantic tableaux have recipes for proving things [Bostock]
     Full Idea: With semantic tableaux there are recipes for proof-construction that we can operate, whereas with natural deduction there are not.
     From: David Bostock (Intermediate Logic [1997], 6.5)
Tableau rules are all elimination rules, gradually shortening formulae [Bostock]
     Full Idea: In their original setting, all the tableau rules are elimination rules, allowing us to replace a longer formula by its shorter components.
     From: David Bostock (Intermediate Logic [1997], 7.3)
5. Theory of Logic / H. Proof Systems / 6. Sequent Calculi
A sequent calculus is good for comparing proof systems [Bostock]
     Full Idea: A sequent calculus is a useful tool for comparing two systems that at first look utterly different (such as natural deduction and semantic tableaux).
     From: David Bostock (Intermediate Logic [1997], 7.2)
Each line of a sequent calculus is a conclusion of previous lines, each one explicitly recorded [Bostock]
     Full Idea: A sequent calculus keeps an explicit record of just what sequent is established at each point in a proof. Every line is itself the sequent proved at that point. It is not a linear sequence or array of formulae, but a matching array of whole sequents.
     From: David Bostock (Intermediate Logic [1997], 7.1)
5. Theory of Logic / I. Semantics of Logic / 1. Semantics of Logic
Interpretation by assigning objects to names, or assigning them to variables first [Bostock, by PG]
     Full Idea: There are two approaches to an 'interpretation' of a logic: the first method assigns objects to names, and then defines connectives and quantifiers, focusing on truth; the second assigns objects to variables, then variables to names, using satisfaction.
     From: report of David Bostock (Intermediate Logic [1997], 3.4) by PG - Db (lexicon)
     A reaction: [a summary of nine elusive pages in Bostock] He says he prefers the first method, but the second method is more popular because it handles open formulas, by treating free variables as if they were names.
5. Theory of Logic / I. Semantics of Logic / 4. Satisfaction
'Satisfaction' is a function from models, assignments, and formulas to {true,false} [Shapiro]
     Full Idea: The 'satisfaction' relation may be thought of as a function from models, assignments, and formulas to the truth values {true,false}.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.1)
     A reaction: This at least makes clear that satisfaction is not the same as truth. Now you have to understand how Tarski can define truth in terms of satisfaction.
5. Theory of Logic / I. Semantics of Logic / 5. Extensionalism
Extensionality is built into ordinary logic semantics; names have objects, predicates have sets of objects [Bostock]
     Full Idea: Extensionality is built into the semantics of ordinary logic. When a name-letter is interpreted as denoting something, we just provide the object denoted. All that we provide for a one-place predicate-letter is the set of objects that it is true of.
     From: David Bostock (Intermediate Logic [1997])
     A reaction: Could we keep the syntax of ordinary logic, and provide a wildly different semantics, much closer to real life? We could give up these dreadful 'objects' that Frege lumbered us with. Logic for processes, etc.
If an object has two names, truth is undisturbed if the names are swapped; this is Extensionality [Bostock]
     Full Idea: If two names refer to the same object, then in any proposition which contains either of them the other may be substituted in its place, and the truth-value of the proposition will be unaltered. This is the Principle of Extensionality.
     From: David Bostock (Intermediate Logic [1997], 3.1)
     A reaction: He acknowledges that ordinary language is full of counterexamples, such as 'he doesn't know the Morning Star and the Evening Star are the same body' (when he presumably knows that the Morning Star is the Morning Star). This is logic. Like maths.
5. Theory of Logic / J. Model Theory in Logic / 1. Logical Models
Semantics for models uses set-theory [Shapiro]
     Full Idea: Typically, model-theoretic semantics is formulated in set theory.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 2.5.1)
5. Theory of Logic / J. Model Theory in Logic / 2. Isomorphisms
An axiomatization is 'categorical' if its models are isomorphic, so there is really only one interpretation [Shapiro]
     Full Idea: An axiomatization is 'categorical' if all its models are isomorphic to one another; ..hence it has 'essentially only one' interpretation [Veblen 1904].
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.2.1)
Categoricity can't be reached in a first-order language [Shapiro]
     Full Idea: Categoricity cannot be attained in a first-order language.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.3)
5. Theory of Logic / J. Model Theory in Logic / 3. Löwenheim-Skolem Theorems
Downward Löwenheim-Skolem: each satisfiable countable set always has countable models [Shapiro]
     Full Idea: A language has the Downward Löwenheim-Skolem property if each satisfiable countable set of sentences has a model whose domain is at most countable.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 6.5)
     A reaction: This means you can't employ an infinite model to represent a fact about a countable set.
The Löwenheim-Skolem theorems show an explosion of infinite models, so 1st-order is useless for infinity [Shapiro]
     Full Idea: The Löwenheim-Skolem theorems mean that no first-order theory with an infinite model is categorical. If Γ has an infinite model, then it has a model of every infinite cardinality. So first-order languages cannot characterize infinite structures.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 4.1)
     A reaction: So much of the debate about different logics hinges on characterizing 'infinite structures' - whatever they are! Shapiro is a leading structuralist in mathematics, so he wants second-order logic to help with his project.
Upward Löwenheim-Skolem: each infinite model has infinite models of all sizes [Shapiro]
     Full Idea: A language has the Upward Löwenheim-Skolem property if for each set of sentences whose model has an infinite domain, then it has a model at least as big as each infinite cardinal.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 6.5)
     A reaction: This means you can't have a countable model to represent a fact about infinite sets.
Substitutional semantics only has countably many terms, so Upward Löwenheim-Skolem trivially fails [Shapiro]
     Full Idea: The Upward Löwenheim-Skolem theorem fails (trivially) with substitutional semantics. If there are only countably many terms of the language, then there are no uncountable substitution models.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 9.1.4)
     A reaction: Better and better. See Idea 13674. Why postulate more objects than you can possibly name? I'm even suspicious of all real numbers, because you can't properly define them in finite terms. Shapiro objects that the uncountable can't be characterized.
5. Theory of Logic / K. Features of Logics / 2. Consistency
A set of formulae is 'inconsistent' when there is no interpretation which can make them all true [Bostock]
     Full Idea: 'Γ |=' means 'Γ is a set of closed formulae, and there is no (standard) interpretation in which all of the formulae in Γ are true'. We abbreviate this last to 'Γ is inconsistent'.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: This is a semantic approach to inconsistency, in terms of truth, as opposed to saying that we cannot prove both p and ¬p. I take this to be closer to the true concept, since you need never have heard of 'proof' to understand 'inconsistent'.
For 'negation-consistent', there is never |-(S)φ and |-(S)¬φ [Bostock]
     Full Idea: Any system of proof S is said to be 'negation-consistent' iff there is no formula such that |-(S)φ and |-(S)¬φ.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: Compare Idea 13542. This version seems to be a 'strong' version, as it demands a higher standard than 'absolute consistency'. Both halves of the condition would have to be established.
A proof-system is 'absolutely consistent' iff we don't have |-(S)φ for every formula [Bostock]
     Full Idea: Any system of proof S is said to be 'absolutely consistent' iff it is not the case that for every formula we have |-(S)φ.
     From: David Bostock (Intermediate Logic [1997], 4.5)
     A reaction: Bostock notes that a sound system will be both 'negation-consistent' (Idea 13541) and absolutely consistent. 'Tonk' systems can be shown to be unsound because the two come apart.
5. Theory of Logic / K. Features of Logics / 3. Soundness
'Weakly sound' if every theorem is a logical truth; 'sound' if every deduction is a semantic consequence [Shapiro]
     Full Idea: A logic is 'weakly sound' if every theorem is a logical truth, and 'strongly sound', or simply 'sound', if every deduction from Γ is a semantic consequence of Γ. Soundness indicates that the deductive system is faithful to the semantics.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.1)
     A reaction: Similarly, 'weakly complete' is when every logical truth is a theorem.
5. Theory of Logic / K. Features of Logics / 4. Completeness
We can live well without completeness in logic [Shapiro]
     Full Idea: We can live without completeness in logic, and live well.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: This is the kind of heady suggestion that American philosophers love to make. Sounds OK to me, though. Our ability to draw good inferences should be expected to outrun our ability to actually prove them. Completeness is for wimps.
5. Theory of Logic / K. Features of Logics / 6. Compactness
Inconsistency or entailment just from functors and quantifiers is finitely based, if compact [Bostock]
     Full Idea: Being 'compact' means that if we have an inconsistency or an entailment which holds just because of the truth-functors and quantifiers involved, then it is always due to a finite number of the propositions in question.
     From: David Bostock (Intermediate Logic [1997], 4.8)
     A reaction: Bostock says this is surprising, given the examples 'a is not a parent of a parent of b...' etc, where an infinity seems to establish 'a is not an ancestor of b'. The point, though, is that this truth doesn't just depend on truth-functors and quantifiers.
Compactness means an infinity of sequents on the left will add nothing new [Bostock]
     Full Idea: The logic of truth-functions is compact, which means that sequents with infinitely many formulae on the left introduce nothing new. Hence we can confine our attention to finite sequents.
     From: David Bostock (Intermediate Logic [1997], 5.5)
     A reaction: This makes it clear why compactness is a limitation in logic. If you want the logic to be unlimited in scope, it isn't; it only proves things from finite numbers of sequents. This makes it easier to prove completeness for the system.
Non-compactness is a strength of second-order logic, enabling characterisation of infinite structures [Shapiro]
     Full Idea: It is sometimes said that non-compactness is a defect of second-order logic, but it is a consequence of a crucial strength - its ability to give categorical characterisations of infinite structures.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: The dispute between fans of first- and second-order may hinge on their attitude to the infinite. I note that Skolem, who was not keen on the infinite, stuck to first-order. Should we launch a new Skolemite Crusade?
Compactness is derived from soundness and completeness [Shapiro]
     Full Idea: Compactness is a corollary of soundness and completeness. If Γ is not satisfiable, then, by completeness, Γ is not consistent. But deductions contain only finitely many premises. So a finite subset shows the inconsistency.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 4.1)
     A reaction: [this is abbreviated, but a proof of compactness] Since all worthwhile logics are sound, this effectively means that completeness entails compactness.
5. Theory of Logic / K. Features of Logics / 9. Expressibility
A language is 'semantically effective' if its logical truths are recursively enumerable [Shapiro]
     Full Idea: A logical language is 'semantically effective' if the collection of logically true sentences is a recursively enumerable set of strings.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 6.5)
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / b. Types of number
Complex numbers can be defined as reals, which are defined as rationals, then integers, then naturals [Shapiro]
     Full Idea: 'Definitions' of integers as pairs of naturals, rationals as pairs of integers, reals as Cauchy sequences of rationals, and complex numbers as pairs of reals are reductive foundations of various fields.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 2.1)
     A reaction: On p.30 (bottom) Shapiro objects that in the process of reduction the numbers acquire properties they didn't have before.
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / d. Natural numbers
Only higher-order languages can specify that 0,1,2,... are all the natural numbers that there are [Shapiro]
     Full Idea: The main problem of characterizing the natural numbers is to state, somehow, that 0,1,2,.... are all the numbers that there are. We have seen that this can be accomplished with a higher-order language, but not in a first-order language.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 9.1.4)
6. Mathematics / A. Nature of Mathematics / 3. Nature of Numbers / e. Ordinal numbers
Natural numbers are the finite ordinals, and integers are equivalence classes of pairs of finite ordinals [Shapiro]
     Full Idea: By convention, the natural numbers are the finite ordinals, the integers are certain equivalence classes of pairs of finite ordinals, etc.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 9.3)
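The equivalence-class construction mentioned here can be made concrete. A small sketch (helper names are my own, for illustration): a pair (m, n) of naturals stands for the integer m − n, and two pairs are equivalent when m + q = n + p.

```python
# Integers as equivalence classes of pairs of naturals: (m, n) stands
# for m - n, and (m, n) ~ (p, q) iff m + q = n + p. Names are illustrative.
def equivalent(a, b):
    (m, n), (p, q) = a, b
    return m + q == n + p

def add(a, b):                    # addition defined on representatives
    (m, n), (p, q) = a, b
    return (m + p, n + q)

assert equivalent((2, 5), (0, 3))                 # both represent -3
assert equivalent(add((2, 5), (1, 0)), (0, 2))    # -3 + 1 = -2
```

One must then check that operations like `add` are well-defined, i.e. that the result's class does not depend on which representatives were chosen.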
6. Mathematics / A. Nature of Mathematics / 5. The Infinite / g. Continuum Hypothesis
The 'continuum' is the cardinality of the powerset of a denumerably infinite set [Shapiro]
     Full Idea: The 'continuum' is the cardinality of the powerset of a denumerably infinite set.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 5.1.2)
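In the usual notation:

```latex
\mathfrak{c} \;=\; \lvert \mathcal{P}(\mathbb{N}) \rvert \;=\; 2^{\aleph_0}
```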
6. Mathematics / B. Foundations for Mathematics / 4. Axioms for Number / d. Peano arithmetic
First-order arithmetic can't even represent basic number theory [Shapiro]
     Full Idea: Few theorists consider first-order arithmetic to be an adequate representation of even basic number theory.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 5 n28)
     A reaction: This will be because of Idea 13656. Even 'basic' number theory will include all sorts of vast infinities, and that seems to be where the trouble is.
6. Mathematics / B. Foundations for Mathematics / 4. Axioms for Number / f. Mathematical induction
Ordinary or mathematical induction assumes for the first, then always for the next, and hence for all [Bostock]
     Full Idea: The principle of mathematical (or ordinary) induction says suppose the first number, 0, has a property; suppose that if any number has that property, then so does the next; then it follows that all numbers have the property.
     From: David Bostock (Intermediate Logic [1997], 2.8)
     A reaction: Ordinary induction is also known as 'weak' induction. Compare Idea 13359 for 'strong' or complete induction. The number sequence must have a first element, so this doesn't work for the integers.
Complete induction assumes for all numbers less than n, then also for n, and hence for all numbers [Bostock]
     Full Idea: The principle of complete induction says suppose that for every number, if all the numbers less than it have a property, then so does it; it then follows that every number has the property.
     From: David Bostock (Intermediate Logic [1997], 2.8)
     A reaction: Complete induction is also known as 'strong' induction. Compare Idea 13358 for 'weak' or mathematical induction. The number sequence need have no first element.
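The two clauses of ordinary induction can be spot-checked mechanically for a particular property. The sketch below (a finite sanity check, not a proof) uses the illustrative property P(n): 0 + 1 + ... + n = n(n+1)/2.

```python
# Finite spot-check (not a proof) of the two clauses of ordinary induction,
# for the illustrative property P(n): 0 + 1 + ... + n == n*(n+1)/2.
def P(n):
    return sum(range(n + 1)) == n * (n + 1) // 2

assert P(0)                                        # the first number has P
assert all(P(k + 1) for k in range(100) if P(k))   # if k has P, so does k+1
# the induction principle then licenses: every number has P
```

The code only verifies the clauses over a finite range; the induction principle is precisely what lets us pass from those two clauses to all numbers.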
6. Mathematics / B. Foundations for Mathematics / 6. Mathematics as Set Theory / a. Mathematics is set theory
Some sets of natural numbers are definable in set-theory but not in arithmetic [Shapiro]
     Full Idea: There are sets of natural numbers definable in set-theory but not in arithmetic.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 5.3.3)
6. Mathematics / C. Sources of Mathematics / 6. Logicism / c. Neo-logicism
Logicism is distinctive in seeking a universal language, and denying that logic is a series of abstractions [Shapiro]
     Full Idea: It is claimed that aiming at a universal language for all contexts, and the thesis that logic does not involve a process of abstraction, separates the logicists from algebraists and mathematicians, and also from modern model theory.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.1)
     A reaction: I am intuitively drawn to the idea that logic is essentially the result of a series of abstractions, so this gives me a further reason not to be a logicist. Shapiro cites Goldfarb 1979 and van Heijenoort 1967. Logicists reduce abstraction to logic.
6. Mathematics / C. Sources of Mathematics / 6. Logicism / d. Logicism critique
Mathematics and logic have no border, and logic must involve mathematics and its ontology [Shapiro]
     Full Idea: I extend Quinean holism to logic itself; there is no sharp border between mathematics and logic, especially the logic of mathematics. One cannot expect to do logic without incorporating some mathematics and accepting at least some of its ontology.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], Pref)
     A reaction: I have strong sales resistance to this proposal. Mathematics may have hijacked logic and warped it for its own evil purposes, but if logic is just the study of inferences then it must be more general than to apply specifically to mathematics.
6. Mathematics / C. Sources of Mathematics / 10. Constructivism / d. Predicativism
Some reject formal properties if they are not defined, or defined impredicatively [Shapiro]
     Full Idea: Some authors (Poincaré and Russell, for example) were disposed to reject properties that are not definable, or are definable only impredicatively.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 7.1)
     A reaction: I take Quine to be the culmination of this line of thought, with his general rejection of 'attributes' in logic and in metaphysics.
8. Modes of Existence / A. Relations / 4. Formal Relations / a. Types of relation
A relation is not reflexive, just because it is transitive and symmetrical [Bostock]
     Full Idea: It is easy to fall into the error of supposing that a relation which is both transitive and symmetrical must also be reflexive.
     From: David Bostock (Intermediate Logic [1997], 4.7)
     A reaction: Compare Idea 14430! Transitivity will take you there, and symmetry will get you back, but that doesn't entitle you to take the shortcut?
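The error is easy to exhibit with a tiny counterexample (my own, not Bostock's), checked by brute force:

```python
# On domain {0, 1}, R = {(0, 0)} is transitive and symmetric, yet not
# reflexive: 1 bears R to nothing, so "there and back" never starts for 1.
dom = {0, 1}
R = {(0, 0)}
symmetric  = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
reflexive  = all((x, x) in R for x in dom)
assert symmetric and transitive and not reflexive
```

The "shortcut" argument only delivers Rxx for elements that bear R to something; elements related to nothing slip through.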
Relations can be one-many (at most one on the left) or many-one (at most one on the right) [Bostock]
     Full Idea: A relation is 'one-many' if for anything on the right there is at most one on the left (∀xyz(Rxz∧Ryz→x=y)), and is 'many-one' if for anything on the left there is at most one on the right (∀xyz(Rzx∧Rzy→x=y)).
     From: David Bostock (Intermediate Logic [1997], 8.1)
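Bostock's two conditions are directly checkable by brute force on finite relations given as sets of pairs (the example relation below is my own illustration):

```python
# Bostock's conditions on small finite relations, given as sets of pairs.
def one_many(R):     # Rxz & Ryz -> x = y : at most one on the left
    return all(x == y for (x, z1) in R for (y, z2) in R if z1 == z2)

def many_one(R):     # Rzx & Rzy -> x = y : at most one on the right
    return all(x == y for (z1, x) in R for (z2, y) in R if z1 == z2)

parent_of = {('alice', 'carol'), ('bob', 'carol')}   # two parents, one child
assert many_one(parent_of) and not one_many(parent_of)
```

A relation that is both one-many and many-one is one-one, which is the condition a relation must meet to count as a function with a functional inverse.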
8. Modes of Existence / B. Properties / 10. Properties as Predicates
Properties are often seen as intensional; equiangular and equilateral are different, despite identity of objects [Shapiro]
     Full Idea: Properties are often taken to be intensional; equiangular and equilateral are thought to be different properties of triangles, even though any triangle is equilateral if and only if it is equiangular.
     From: Stewart Shapiro (Foundations without Foundationalism [1991], 1.3)
     A reaction: Many logicians seem to want to treat properties as sets of objects (red being just the set of red things), but this looks like a desperate desire to say everything in first-order logic, where only objects are available to quantify over.
9. Objects / F. Identity among Objects / 5. Self-Identity
If non-existent things are self-identical, they are just one thing - so call it the 'null object' [Bostock]
     Full Idea: If even non-existent things are still counted as self-identical, then all non-existent things must be counted as identical with one another, so there is at most one non-existent thing. We might arbitrarily choose zero, or invent 'the null object'.
     From: David Bostock (Intermediate Logic [1997], 8.6)
10. Modality / A. Necessity / 6. Logical Necessity
The idea that anything which can be proved is necessary has a problem with empty names [Bostock]
     Full Idea: The common Rule of Necessitation says that what can be proved is necessary, but this is incorrect if we do not permit empty names. The most straightforward answer is to modify elementary logic so that only necessary truths can be proved.
     From: David Bostock (Intermediate Logic [1997], 8.4)
19. Language / C. Assigning Meanings / 3. Predicates
A (modern) predicate is the result of leaving a gap for the name in a sentence [Bostock]
     Full Idea: A simple way of approaching the modern notion of a predicate is this: given any sentence which contains a name, the result of dropping that name and leaving a gap in its place is a predicate. Very different from predicates in Aristotle and Kant.
     From: David Bostock (Intermediate Logic [1997], 3.2)
     A reaction: This concept derives from Frege. To get to grips with contemporary philosophy you have to relearn all sorts of basic words like 'predicate' and 'object'.
21. Aesthetics / C. Artistic Issues / 7. Art and Morality
Musical performance can reveal a range of virtues [Damon of Ath.]
     Full Idea: In singing and playing the lyre, a boy will be likely to reveal not only courage and moderation, but also justice.
     From: Damon (fragments/reports [c.460 BCE], B4), quoted by (who?) - where?