Topics in Knowledge Representation: Belief Revision and Conditional Knowledge Bases
Thesis posted on 18.10.2016 by Jonathon Yaggie
Results on two topics within knowledge representation and reasoning are presented. The first, belief revision, concerns the incorporation of new knowledge into an existing body of knowledge. The research presented here provides a formal logical framework for deciding whether a finite set of logical statements (postulates) can characterize a given class of belief revision operators. A connection is established between characterizability by postulates and definability in a fragment of second-order logic; this connection allows tools from finite model theory to be used to identify classes of belief revision operators that are not characterizable. The framework is developed in Chapter 1 and extended to the case of Horn belief revision in Chapter 2. In addition, examples of characterizable and non-characterizable classes of belief revision operators are given to demonstrate the application of this framework.

The second topic is a method, proposed by Kern-Isberner et al., for inference on conditional knowledge bases. A conditional knowledge base is a set of conditional statements with associated probabilities. Such a knowledge base may not contain enough information to determine a probability distribution uniquely, so the maximum entropy principle is applied, allowing probabilities of statements outside the knowledge base to be inferred. The technique explored in Chapter 4 represents knowledge bases as polynomials and uses methods from computational algebra to perform inference on new conditional statements. Several examples and experiments assess the viability of this method for inference with arbitrary conditional knowledge bases, and the algebraic geometry underlying these examples is explained.
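To illustrate the maximum-entropy idea in the simplest possible setting (this is only a sketch of the general principle, not the thesis's polynomial/computational-algebra method), consider a knowledge base with two Boolean atoms a, b and a single conditional statement P(b|a) = 0.8. The maximum-entropy distribution over the four possible worlds takes an exponential form in a single Lagrange multiplier, which can be found numerically by bisection; probabilities of statements outside the knowledge base, such as P(b|¬a), are then read off the resulting distribution:

```python
# Minimal sketch of maximum-entropy inference for one conditional
# constraint P(b|a) = q over two Boolean atoms a, b. All names here
# are illustrative; this is not the thesis's algebraic method.
import math
from itertools import product

worlds = list(product([0, 1], repeat=2))  # worlds as (a, b) truth values

# Encode P(b|a) = q as a linear constraint E[f] = 0, where
# f(w) = (1 - q) on a∧b worlds, -q on a∧¬b worlds, 0 elsewhere.
q = 0.8

def f(w):
    a, b = w
    if a and b:
        return 1 - q
    if a and not b:
        return -q
    return 0.0

def distribution(lam):
    # Max-entropy solutions have the form p(w) ∝ exp(lam * f(w)).
    weights = {w: math.exp(lam * f(w)) for w in worlds}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

def expectation(lam):
    p = distribution(lam)
    return sum(p[w] * f(w) for w in worlds)

# E[f] is nondecreasing in lam (its derivative is Var[f]),
# so bisection finds the multiplier with E[f] = 0.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if expectation(mid) < 0:
        lo = mid
    else:
        hi = mid
p = distribution((lo + hi) / 2)

# The constraint is satisfied, and the unconstrained part of the
# distribution stays as uniform (uninformative) as possible.
p_b_given_a = p[(1, 1)] / (p[(1, 1)] + p[(1, 0)])
p_b_given_not_a = p[(0, 1)] / (p[(0, 1)] + p[(0, 0)])
print(p_b_given_a)       # ≈ 0.8
print(p_b_given_not_a)   # ≈ 0.5
```

The answer P(b|¬a) = 0.5 shows the characteristic behavior of the principle: where the knowledge base is silent, maximum entropy commits to nothing beyond indifference. The thesis's Chapter 4 approach reaches such inferences by recasting the entropy-maximization conditions as polynomial equations and solving them with computational-algebra tools rather than by numerical optimization.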