Combining Texts

All the ideas for 'The Case for Closure', 'How to Define Theoretical Terms' and 'Isolation and Non-arbitrary Division'



15 ideas

2. Reason / D. Definition / 2. Aims of Definition
Defining terms either enables elimination, or shows that they don't require elimination [Lewis]
     Full Idea: To define theoretical terms might be to show how to do without them, but it is better to say that it shows there is no good reason to want to do without them.
     From: David Lewis (How to Define Theoretical Terms [1970], Intro)
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / a. Units
Objects do not naturally form countable units [Koslicki]
     Full Idea: Objects do not by themselves naturally fall into countable units.
     From: Kathrin Koslicki (Isolation and Non-arbitrary Division [1997], 2.2)
     A reaction: Hm. This seems to be modern Fregean orthodoxy. Why did the institution of counting ever get started if the things in the world didn't demand counting? Even birds are aware of the number of eggs in their nest (because they miss a stolen one).
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / c. Counting procedure
We can still count squares, even if they overlap [Koslicki]
     Full Idea: The fact that there is overlap does not seem to inhibit our ability to count squares.
     From: Kathrin Koslicki (Isolation and Non-arbitrary Division [1997], 2.2)
     A reaction: She has a diagram of three squares overlapping slightly at their corners. Contrary to Frege, this seems to depend on a subliminal concept of the square that doesn't depend on language.
There is no deep reason why we count carrots but not asparagus [Koslicki]
     Full Idea: Why do speakers of English count carrots but not asparagus? There is no 'deep' reason.
     From: Kathrin Koslicki (Isolation and Non-arbitrary Division [1997])
     A reaction: Koslicki is offering this to defend the Fregean conceptual view of counting, but what seems to matter is what is countable, not whether we happen to count it. You don't need to know what carrots are to count them. Cooks count asparagus.
6. Mathematics / A. Nature of Mathematics / 4. Using Numbers / d. Counting via concepts
We struggle to count branches and waves because our concepts lack clear boundaries [Koslicki]
     Full Idea: The reason we have a hard time counting the branches and the waves is because our concepts 'branches on the tree' and 'waves on the ocean' do not determine sufficiently precise boundaries: the concepts do not draw a clear invisible line around each thing.
     From: Kathrin Koslicki (Isolation and Non-arbitrary Division [1997], 2.2)
     A reaction: This is the 'isolation' referred to in Frege.
7. Existence / C. Structure of Existence / 8. Stuff / a. Pure stuff
We talk of snow as what stays the same, when it is a heap or drift or expanse [Koslicki]
     Full Idea: Talk of snow concerns what stays the same when some snow changes, as it might be, from a heap of snow to a drift, to an expanse.
     From: Kathrin Koslicki (Isolation and Non-arbitrary Division [1997], 2.2)
     A reaction: The whiteness also stays the same, but isn't stuff.
10. Modality / E. Possible worlds / 3. Transworld Objects / b. Rigid designation
A logically determinate name names the same thing in every possible world [Lewis]
     Full Idea: A logically determinate name is one which names the same thing in every possible world.
     From: David Lewis (How to Define Theoretical Terms [1970], III)
     A reaction: This appears to be rigid designation, before Kripke had introduced the term.
11. Knowledge Aims / B. Certain Knowledge / 2. Common Sense Certainty
Commitment to 'I have a hand' only makes sense in a context where it has been doubted [Hawthorne]
     Full Idea: If I utter 'I know I have a hand' then I can only be reckoned a cooperative conversant by my interlocutors on the assumption that there was a real question as to whether I have a hand.
     From: John Hawthorne (The Case for Closure [2005], 2)
     A reaction: This seems to point to the contextualist approach to global scepticism, which concerns whether we are setting the bar high or low for 'knowledge'.
13. Knowledge Criteria / A. Justification Problems / 2. Justification Challenges / c. Knowledge closure
How can we know the heavyweight implications of normal knowledge? Must we distort 'knowledge'? [Hawthorne]
     Full Idea: Those who deny skepticism but accept closure will have to explain how we know the various 'heavyweight' skeptical hypotheses to be false. Do we then twist the concept of knowledge to fit the twin desiderata of closure and anti-skepticism?
     From: John Hawthorne (The Case for Closure [2005], Intro)
     A reaction: [He is giving Dretske's view; Dretske says we do twist knowledge] Thus if I remember yesterday, that has the heavyweight implication that the past is real. Hawthorne nicely summarises why closure produces a philosophical problem.
We wouldn't know the logical implications of our knowledge if small risks added up to big risks [Hawthorne]
     Full Idea: Maybe one cannot know the logical consequences of the proposition that one knows, on account of the fact that small risks add up to big risks.
     From: John Hawthorne (The Case for Closure [2005], 1)
     A reaction: The idea of closure is that the new knowledge has the certainty of logic, and each step is accepted. An array of receding propositions can lose reliability, but that shouldn't apply to logical implications. Assuming monotonic logic, of course.
Denying closure is denying we know P when we know P and Q, which is absurd in simple cases [Hawthorne]
     Full Idea: How could we know that P and Q but not be in a position to know that P (as deniers of closure must say)? If my glass is full of wine, we know 'g is full of wine, and not full of non-wine'. How can we deny that we know it is not full of non-wine?
     From: John Hawthorne (The Case for Closure [2005], 2)
     A reaction: Hawthorne merely raises this doubt. Dretske is concerned with heavyweight implications, but how do you accept lightweight implications like this one, and then suddenly reject them when they become too heavy? [see p.49]
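     A note on the principle at issue (a standard schematic formulation, not a quotation from Hawthorne): closure under known entailment says that if one knows p, and knows that p entails q, then one knows, or is at least in a position to know, q. In symbols:
     $(Kp \land K(p \rightarrow q)) \rightarrow Kq$
     Dretske rejects this for 'heavyweight' consequents such as 'there is an external world'; Hawthorne's wine-glass case presses the worry that the very same principle seems undeniable for lightweight consequents.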
14. Science / B. Scientific Theories / 8. Ramsey Sentences
A Ramsey sentence just asserts that a theory can be realised, without saying by what [Lewis]
     Full Idea: If we specify a theory with all of its terms, and then replace all of those terms with variables, we can then say that some n-tuples of entities can satisfy this formula. This Ramsey sentence then says the theory is realised, without specifying by what.
     From: David Lewis (How to Define Theoretical Terms [1970], II)
     A reaction: [I have compressed Lewis, and cut out the symbolism]
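     A sketch of the construction (Lewis's symbolism is cut from the quotation above, so this is a standard reconstruction rather than his own notation): write the theory's postulate as $T(\tau_1,\dots,\tau_n; o_1,\dots,o_m)$, where the $\tau_i$ are the newly introduced theoretical terms and the $o_j$ are the old terms already understood. The Ramsey sentence existentially generalises on the theoretical terms:
     $\exists x_1 \dots \exists x_n\, T(x_1,\dots,x_n; o_1,\dots,o_m)$
     This asserts only that some n-tuple of entities realises the theory, without saying which.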
There is a method for defining new scientific terms just using the terms we already understand [Lewis]
     Full Idea: I contend that there is a general method for defining newly introduced terms in a scientific theory, one which uses only the old terms we understood beforehand.
     From: David Lewis (How to Define Theoretical Terms [1970], Intro)
     A reaction: Lewis's game is to provide bridge laws for a reductive account of nature, without having to introduce anything entirely new to achieve it. The idea of bridge laws in scientific theory is less in favour these days.
It is better to have one realisation of a theory than many - but it may not always be possible [Lewis]
     Full Idea: A uniquely realised theory is, other things being equal, certainly more satisfactory than a multiply realised theory. We should insist on unique realisation as a standard of correctness unless it is a standard too high to be met.
     From: David Lewis (How to Define Theoretical Terms [1970], III)
     A reaction: The point is that rewriting a theory as Ramsey sentences just says there is at least one realisation, and so it doesn't meet the highest standards for scientific theories. The influence of set-theoretic model theory is obvious in this approach.
The Ramsey sentence of a theory says that it has at least one realisation [Lewis]
     Full Idea: The Ramsey sentence of a theory says that it has at least one realisation.
     From: David Lewis (How to Define Theoretical Terms [1970], V)