Episodes
-
Steve Awodey (CMU/MCMP) gives a talk at the MCMP Workshop on Modality titled "Modality and Categories".
-
Peter Verdée (Ghent) gives a talk at the MCMP Colloquium (8 Feb, 2012) titled "Adaptive Logics: Introduction, Applications, Computational Aspects and Recent Developments". Abstract: In this talk I give a thorough introduction to adaptive logics (cf. [1, 2, 3]). Adaptive logics were first devised by Diderik Batens and are now the main research area of the logicians in the Centre for Logic and Philosophy of Science in Ghent. First I explain the main purpose of adaptive logics: formalizing defeasible reasoning in a unified way, aiming at a normative account of fallible rationality. I give an informal characterization of what we mean by the notion 'defeasible reasoning' and explain why it is useful and interesting to formalize this type of reasoning by means of logics. Then I present the technical machinery of the so-called standard format of adaptive logics. The standard format is a general way to define adaptive logics from three basic variables. Most existing adaptive logics can be defined within this format. It immediately provides the logics with a dynamic proof theory, a selection semantics and a number of important meta-theoretic properties. I proceed by giving some popular concrete examples of adaptive logics in standard form. I quickly introduce inconsistency-adaptive logics, adaptive logics for induction and adaptive logics for reasoning with plausible knowledge/beliefs.
Next I present some computational results on adaptive logics. The adaptive consequence relations are in general rather complex (I proved that there are recursive premise sets such that their adaptive consequence sets are Π¹₁-complex – cf. [4]). However, I argue that this does not harm the naturalistic aims of adaptive logics, given a specific view on the relation between actual reasoning and adaptive logics. Finally, two interesting recent developments are presented: (1) Lexicographic adaptive logics. They fall outside of the scope of the standard format, but have similar properties and are able to handle prioritized information. (2) Adaptive set theories. Such theories start from the unrestricted comprehension axiom scheme but are strong enough to serve as a foundation for an interesting part of classical mathematics, by treating the paradoxes in a novel, defeasible way.
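For readers unfamiliar with the framework, here is a minimal sketch of the "three basic variables" of the standard format, as it is usually presented in the adaptive-logics literature (this gloss is mine, not part of the abstract): an adaptive logic AL is specified by a triple
\[
\mathbf{AL} \;=\; \langle\, \mathbf{LLL},\ \Omega,\ \text{strategy} \,\rangle,
\]
where LLL is the lower limit logic (a monotonic base logic), \(\Omega\) is the set of abnormalities (formulas presumed false until and unless the premises force some of them to hold), and the strategy (typically reliability or minimal abnormality) fixes how conditional derivation steps may later be retracted. On this presentation the adaptive consequence set always lies between that of the lower limit logic and that of the upper limit logic ULL obtained by adding the negations of all abnormalities:
\[
Cn_{\mathbf{LLL}}(\Gamma) \;\subseteq\; Cn_{\mathbf{AL}}(\Gamma) \;\subseteq\; Cn_{\mathbf{ULL}}(\Gamma).
\]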
-
Sonja Smets (University of Groningen) gives a talk at the MCMP Colloquium titled "Belief Dynamics under Iterated Revision: Cycles, Fixed Points and Truth-tracking". Abstract: We investigate the long-term behavior of processes of learning by iterated belief revision with new truthful information. In the case of higher-order doxastic sentences, the iterated revision can even be induced by repeated learning of the same sentence (which conveys new truths at each stage by referring to the agent's own current beliefs at that stage). For a number of belief-revision methods (conditioning, lexicographic revision and minimal revision), we investigate the conditions under which iterated belief revision with truthful information stabilizes: while the process of model-changing by iterated conditioning always leads eventually to a fixed point (and hence all doxastic attitudes, including conditional beliefs, strong beliefs, and any form of "knowledge", eventually stabilize), this is not the case for other belief-revision methods. We show that infinite revision cycles exist (even when the initial model is finite and even in the case of repeated revision with one single true sentence), but we also give syntactic and semantic conditions ensuring that beliefs stabilize in the limit. Finally, we look at the issue of convergence to truth, giving both sufficient conditions ensuring that revision stabilizes on true beliefs, and (stronger) conditions ensuring that the process stabilizes on "full truth" (i.e. beliefs that are both true and complete). This talk is based on joint work with A. Baltag.
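As a quick reference, here is a sketch (mine, following standard presentations of these operations on plausibility models, not anything stated in the abstract) of how the three revision methods act on a plausibility preorder \(\leq\) over a set of worlds W, where \(w \leq v\) reads "w is at least as plausible as v", upon receiving new information \(\varphi\):
\[
\begin{aligned}
\text{Conditioning:}\quad & W' = \{\, w \in W : w \models \varphi \,\}, \qquad {\leq'} \;=\; {\leq} \,\cap\, (W' \times W'),\\
\text{Lexicographic:}\quad & w \leq' v \;\iff\; \bigl(w \models \varphi \text{ and } v \not\models \varphi\bigr) \;\text{or}\; \bigl(w,v \text{ agree on } \varphi \text{ and } w \leq v\bigr),\\
\text{Minimal:}\quad & \text{the } {\leq}\text{-best } \varphi\text{-worlds become the new most plausible worlds; elsewhere } \leq \text{ is unchanged.}
\end{aligned}
\]
This also suggests why iterated conditioning stabilizes while the other methods need not: conditioning can only shrink the model, whereas lexicographic and minimal revision merely reorder it.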
-
Alexandru Baltag (ILLC Amsterdam) gives a talk at the MCMP Colloquium titled "Tracking the Truth Requires a Non-wellfounded Prior! A Study in the Learning Power (and Limits) of Bayesian (and Qualitative) Update". Abstract: The talk is about tracking "full truth" in the limit by iterated belief updates. Unlike Sonja's talk (which focused on finite models), we now allow the initial model (and thus the initial set of epistemic possibilities) to be infinite. We compare the truth-tracking power of various belief-revision methods, including probabilistic conditioning (also known as Bayesian update) and some of its qualitative, "plausibilistic" analogues (conditioning, lexicographic revision, minimal revision). We focus in particular on the question of whether any of these methods is "universal" (i.e. as good at tracking the truth as any other learning method). We show that this is not the case, as long as we keep the standard probabilistic (or belief-revision) setting. On the positive side, we show that if we consider appropriate generalizations of conditioning in a non-standard, non-wellfounded setting, then universality is achieved for some (though not all) of these learning methods. In the qualitative case, this means that we need to allow the prior plausibility relation to be a non-wellfounded (though total) preorder. In the probabilistic case, this means moving to a generalized conditional probability setting, in which the family of "cores" (or "strong beliefs") may be non-wellfounded (when ordered by inclusion or logical entailment). As a consequence, neither the family of classical probability spaces, nor that of lexicographic probability spaces, nor even the family of all countably additive (conditional) probability spaces, is rich enough to make Bayesian conditioning "universal" from a Learning Theoretic point of view! This talk is based on joint work with Nina Gierasimczuk and Sonja Smets.
-
Richard Pettigrew (University of Bristol) gives a talk at the MCMP Colloquium titled "Accuracy, Chance, and the Principal Principle".
-
Douglas Patterson (Universität Leipzig) gives a talk at the MCMP Colloquium titled "Theory and Concept in Tarski's Philosophy of Language". Abstract: In this talk I will set out some of the background of Tarski's famous work on truth and semantics by looking at important views of his teachers Tadeusz Kotarbinski and Stanislaw Lesniewski in the philosophy of language and the "methodology of deductive sciences". With the understanding of the assumed philosophy of language and logic of the important articles set out in this manner, I will look at a number of issues familiar from the literature. I will sort out Tarski's conception of "material adequacy", discuss the relationship between a Tarskian definition of truth and a conceptual analysis of a more familiar sort, and consider the consequences of the views presented for the question of whether Tarski was a deflationist or a correspondence theorist.
-
Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "The 'fitting problem' for logical semantic systems". Abstract: When applying logical tools to study a given extra-theoretical, informal phenomenon, it is now customary to design a deductive system, and a semantic system based on a class of mathematical structures. The assumption seems to be that they would each capture specific aspects of the target phenomenon. Kreisel has famously offered an argument on how, if there is a proof of completeness for the deductive system with respect to the semantic system, the target phenomenon becomes "squeezed" between the extension of the two, thus ensuring the extensional adequacy of the technical apparatuses with respect to the target phenomenon: the so-called squeezing argument. However, besides a proof of completeness, for the squeezing argument to go through, two premises must obtain (for a fact e occurring within the range of the target phenomenon): (1) If e is the case according to the deductive system, then e is the case according to the target phenomenon. (2) If e is the case according to the target phenomenon, then e is the case according to the semantic system. In other words, the semantic system would provide the necessary conditions for e to be the case according to the target phenomenon, while the deductive system would provide the relevant sufficient conditions. But clearly, both (1) and (2) rely crucially on the intuitive adequacy of the deductive and the semantic systems for the target phenomenon. In my talk, I focus on the (in)plausibility of instances of (2), and argue that the adequacy of a semantic system for a given target phenomenon must not be taken for granted. In particular, I discuss the results presented in (Andrade-Lotero & Dutilh Novaes forthcoming) on multiple semantic systems for Aristotelian syllogistic, which are all sound and complete with respect to a reasonable deductive system for syllogistic (Corcoran's system D), but which are not extensionally equivalent; indeed, as soon as the language is enriched, they start disagreeing with each other as to which syllogistic arguments (in the enriched language) are valid. A plurality of apparently adequate semantic systems for a given target phenomenon brings to the fore what I describe as the 'fitting problem' for logical semantic systems: what is to guarantee that these technical apparatuses adequately capture significant aspects of the target phenomenon? If the different candidates have strikingly different properties (as is the case here), then they cannot all be adequate semantic systems for the target phenomenon. More generally, the analysis illustrates the need for criteria of adequacy for semantic systems based on mathematical structures. Moreover, taking Aristotelian syllogistic as a case study illustrates the fruitfulness but also the complexity of employing logical tools in historical analyses.
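Schematically (my own rendering of the premises as stated in the abstract), writing D(e), T(e) and S(e) for "e is the case according to the deductive system / the target phenomenon / the semantic system", the squeezing argument runs:
\[
\underbrace{D(e) \Rightarrow T(e)}_{(1)} \qquad
\underbrace{T(e) \Rightarrow S(e)}_{(2)} \qquad
\underbrace{S(e) \Rightarrow D(e)}_{\text{completeness}}
\qquad\Longrightarrow\qquad
D(e) \Leftrightarrow T(e) \Leftrightarrow S(e),
\]
so that the informal notion T is extensionally "squeezed" between the two technical ones; the talk's worry concerns premise (2).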
-
Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "Cognitive motivations for treating formalisms as calculi". Abstract: In The Logical Syntax of Language, Carnap famously recommended that logical languages be treated as mere calculi, and that their symbols be viewed as meaningless; reasoning with the system is to be guided solely on the basis of its rules of transformation. Carnap's main motivation for this recommendation seems to be related to a concern with precision and exactness.
In my talk, I argue that Carnap was right in insisting on the benefits of treating logical formalisms as calculi, but he was wrong in thinking that enhanced precision is the main advantage of this approach. Instead, I argue that a deeper impact of treating formalisms as calculi is of a cognitive nature: by adopting this stance, the reasoner is able to counter some of her 'default' reasoning tendencies, which (although advantageous in most practical situations) may hinder the discovery of novel facts in scientific contexts. One of these cognitive tendencies is the constant search for confirmation for the beliefs one already holds, as extensively documented and studied in the psychology of reasoning literature, and often referred to as confirmation bias/belief bias.
Treating formalisms as meaningless and relying on their well-defined rules of formation and transformation allows the reasoner to counter her own belief bias for two main reasons: it 'switches off' semantic activation, which is thought to be a largely automatic cognitive process, and it externalizes reasoning processes; they now take place largely through the manipulation of the notation. I argue moreover that the manipulation of the notation engages predominantly sensorimotor processes rather than being carried out internally: the agent is literally 'thinking on the paper'.
The analysis relies heavily on empirical data from psychology and cognitive sciences, and is largely inspired by recent literature on extended cognition (in particular Clark, Menary and Sutton). If I am right, formal languages treated as calculi and viewed as external cognitive artifacts offer a crucial cognitive boost to human agents, in particular in that they seem to produce a beneficial de-biasing effect.
-
Stephan Hartmann (Tilburg) gives a talk at the MCMP Workshop on Computational Metaphysics titled "On the Emergence of Descriptive Norms".
-
Graciela di Pierris (Stanford) gives a talk at the MCMP Colloquium titled "Hume on Space and Geometry". Abstract: Hume's discussion of space, time, and mathematics in Part II of Book I of the Treatise has appeared to many commentators as one of the weakest parts of his work. I argue, on the contrary, that Hume's views on space and geometry are deeply connected with his radically empiricist reliance on phenomenologically given sensory images. He insightfully shows that, working within this epistemological model, we cannot attain complete certainty about the continuum but only at most about discrete quantity. Therefore, geometry, in contrast to arithmetic, cannot be a fully exact science. Nevertheless, Hume does have an illuminating account of Euclid's geometry as an axiomatic demonstrative science, ultimately based on the phenomenological apprehension of the "easiest and least deceitful" sensory images of geometrical figures. Hume's discussion, in my view, demonstrates the severe limitations of a purely empiricist interpretation of the role of such figures (diagrams) in geometry.
-
Branden Fitelson (Rutgers University) gives a talk at the MCMP Workshop on Computational Metaphysics titled "Russellian Descriptions & Gibbardian Indicatives (Two Case Studies Involving Automated Reasoning)". Abstract: The first part of this talk (which is joint work with Paul Oppenheimer) will be about the perils of representing claims involving Russellian definite descriptions in an "automated reasoning friendly" way. I will explain how to eliminate Russellian descriptions, so as to yield logically equivalent (and automated reasoning friendly) statements. This is a special case of a more general problem -- which is representing philosophical theories/explications in a way that automated reasoning tools can understand. The second part of the talk shows how automated reasoning tools can be useful in clarifying the structure (and requisite presuppositions) of well-known philosophical "theorems". Here, the example comes from the philosophy of language, and it involves a certain "triviality result" or "collapse theorem" for the indicative conditional that was first discussed by Gibbard. I show how one can use automated reasoning tools to provide a precise, formal rendition of Gibbard's "theorem". This turns out to be rather revealing about what is (and is not) essential to Gibbard's argument.
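For reference, the Russellian elimination that the first part of the talk turns on is the familiar textbook clause (stated here in my own notation, not as Oppenheimer and Fitelson's particular encoding): a claim \(\psi\) about "the \(\varphi\)" unfolds into an existence, uniqueness and predication claim,
\[
\psi(\iota x\,\varphi(x)) \;\equiv\; \exists x \bigl( \varphi(x) \,\wedge\, \forall y\,(\varphi(y) \rightarrow y = x) \,\wedge\, \psi(x) \bigr),
\]
which contains no description operator and is therefore directly digestible by standard first-order automated reasoning tools.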
-
Hannes Leitgeb (MCMP/LMU) gives a lecture at the Carl-Friedrich-von-Siemens-Stiftung titled "Logic and the Brain". Introductory words by Enno Aufderheide (Secretary General, Humboldt Foundation).
-
Branden Fitelson (Rutgers University) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "Accuracy & Coherence". Abstract: In this talk, I will explore a new way of thinking about the relationship between accuracy norms and coherence norms in epistemology (generally). In the first part of the talk, I will apply the basic ideas to qualitative judgments (belief and disbelief). This will lead to an interesting coherence norm for qualitative judgments (but one which is weaker than classical deductive consistency). In the second part of the talk, I will explain how the approach can be applied to comparative confidence judgments. Again, this will lead to coherence norms that are weaker than classical (comparative probabilistic) coherence norms. Along the way, I will explain how evidential norms can come into conflict with even the weaker coherence norms suggested by our approach.
-
Hannes Leitgeb (MCMP/LMU) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "The Lockean Thesis Revisited".
-