Markovian inference in belief networks, 1998.
Abstract
Bayesian belief networks can represent the complicated probabilistic
processes that form natural sensory inputs. Once the parameters of
the network have been learned, nonlinear inferences about the input
can be made by computing the posterior distribution over the hidden
units (e.g., depth in stereo vision) given the input. Computing the
posterior distribution exactly is not practical in richly connected
networks, but a variational (a.k.a. mean-field) method makes it
easy to find a product-form distribution that approximates the
true posterior distribution. This approximation
assumes that the hidden variables are independent given the current
input. In this paper, we explore a more powerful variational
technique that models the posterior distribution using a Markov
chain. We compare this method with inference using mean fields and
mixtures of mean fields in randomly generated networks.
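For reference, a minimal sketch of the two variational families in standard notation; the symbols (h_i for the hidden units, v for the visible input, and a fixed chain ordering of the hidden units) are illustrative assumptions, not notation taken from the paper:

% Mean-field family: hidden units are treated as independent given the input v
\[
  q_{\mathrm{MF}}(h \mid v) \;=\; \prod_{i=1}^{n} q_i(h_i)
\]
% Markov-chain family: each hidden unit may depend on the previous one,
% so the approximate posterior can capture correlations along the chain
\[
  q_{\mathrm{MC}}(h \mid v) \;=\; q_1(h_1) \prod_{i=2}^{n} q_i(h_i \mid h_{i-1})
\]

The mean-field form is the special case of the chain form in which each conditional q_i(h_i | h_{i-1}) ignores h_{i-1}, which is why the Markov-chain family is strictly more expressive.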