Bayesian Networks: Performance Analysis of Loopy Belief Propagation
Thesis posted on 15.04.2014 by Xiangqiong Shi
Loopy belief propagation (LBP) is a probabilistic inference algorithm for graphical models. It has been successfully applied to low-density parity-check codes, turbo codes, and computer vision problems such as stereo matching, image segmentation, and image inpainting. Belief propagation was first proposed by Judea Pearl in 1982 for tree-structured graphs, extended to poly-trees in 1983, and later applied to general graphs with empirical success. Since LBP ignores the loopy structure of graphs and propagates messages blindly, many researchers have analyzed its convergence and accuracy and proposed variations of the message-passing algorithm.

In this thesis, we focus on the performance analysis of LBP with respect to error bounds and convergence. We present novel results relating the performance of LBP to the strength of the potential functions and the topology of the graphical model. Specifically, we first present lower and upper bounds on multiplicative message errors. We then present uniform and non-uniform distance bounds on beliefs, which improve an existing accuracy bound between beliefs and true marginals. Thereafter, we present a non-uniform sufficient condition for the convergence of LBP, which is shown to be better than existing conditions. We also show that our error bound is related to the rate of convergence of LBP. Furthermore, we analyze the fixed points of completely uniform binary graphs and present tight distance bounds between beliefs. Finally, we implement the LBP algorithm for an image segmentation application and show how segmentation performance is affected by the parameters of the potential functions.
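To make the message-passing scheme discussed above concrete, the following is a minimal sketch of sum-product loopy belief propagation on a pairwise Markov random field. It is an illustrative implementation, not the code used in the thesis; the data layout (dicts of unary and pairwise potentials) and the convergence tolerance are assumptions for the example.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_iters=50, tol=1e-6):
    """Sum-product loopy belief propagation on a pairwise MRF.

    unary:    dict node -> array (k,), unnormalized node potentials
    pairwise: dict (i, j) -> array (k_i, k_j), edge potentials
    edges:    list of (i, j) pairs
    Returns a dict node -> normalized belief vector.
    """
    # Initialize all directed messages to the uniform distribution.
    msgs, nbrs, pot = {}, {}, {}
    for (i, j) in edges:
        msgs[(i, j)] = np.ones(len(unary[j])) / len(unary[j])
        msgs[(j, i)] = np.ones(len(unary[i])) / len(unary[i])
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
        pot[(i, j)] = pairwise[(i, j)]
        pot[(j, i)] = pairwise[(i, j)].T

    for _ in range(n_iters):
        new, delta = {}, 0.0
        for (i, j) in msgs:
            # Product of the unary potential at i and all incoming
            # messages except the one coming back from j.
            prod = unary[i].copy()
            for nb in nbrs[i]:
                if nb != j:
                    prod = prod * msgs[(nb, i)]
            # Sum out the states of i against the edge potential.
            m = pot[(i, j)].T @ prod
            m = m / m.sum()  # normalize to avoid underflow
            new[(i, j)] = m
            delta = max(delta, np.abs(m - msgs[(i, j)]).max())
        msgs = new
        if delta < tol:  # messages have (numerically) reached a fixed point
            break

    # Beliefs: unary potential times all incoming messages, normalized.
    beliefs = {}
    for i in nbrs:
        b = unary[i].copy()
        for nb in nbrs[i]:
            b = b * msgs[(nb, i)]
        beliefs[i] = b / b.sum()
    return beliefs
```

On a tree this schedule recovers the exact marginals; on a loopy graph the same updates are simply iterated, which is exactly the setting whose convergence and accuracy the thesis analyzes.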