# Markovian inference in belief networks

Brendan J. Frey, Neil D. Lawrence, Christopher M. Bishop, 1998.

#### Abstract

Bayesian belief networks can represent the complicated probabilistic processes that form natural sensory inputs. Once the parameters of the network have been learned, nonlinear inferences about the input can be made by computing the posterior distribution over the hidden units (e.g., depth in stereo vision) given the input. Computing the posterior distribution exactly is not practical in richly-connected networks, but it turns out that by using a variational (a.k.a., mean field) method, it is easy to find a product-form distribution that approximates the true posterior distribution. This approximation assumes that the hidden variables are independent given the current input. In this paper, we explore a more powerful variational technique that models the posterior distribution using a Markov chain. We compare this method with inference using mean fields and mixtures of mean fields in randomly generated networks.
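The two variational families the abstract contrasts can be written out explicitly. This is a generic sketch, not notation taken from the paper: $h = (h_1, \ldots, h_N)$ stands for the hidden units and $v$ for the visible input.

```latex
% Mean-field (product-form) approximation: hidden units are
% assumed independent given the current input v.
q_{\mathrm{MF}}(h \mid v) = \prod_{i=1}^{N} q_i(h_i)

% Markovian approximation: each hidden unit depends on its
% predecessor in some ordering, so q is a Markov chain and
% can capture pairwise dependencies the mean field ignores.
q_{\mathrm{MC}}(h \mid v) = q_1(h_1) \prod_{i=2}^{N} q_i(h_i \mid h_{i-1})

% Either family is fit by maximizing the variational lower
% bound on the log-likelihood, i.e., minimizing
% KL(q(h|v) || p(h|v)):
\log p(v) \;\ge\; \mathbb{E}_{q}\!\left[\log p(v, h)\right] + H[q]
```

Both bounds have the same form; the Markovian family simply enlarges the set of distributions $q$ over which the bound is optimized, so its optimum can only be tighter.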

#### Cite this Paper

BibTeX

```
@InProceedings{pmlr-v-frey-markovian98,
title = {Markovian inference in belief networks},
author = {Brendan J. Frey and Neil D. Lawrence and Christopher M. Bishop},
year = {1998},
editor = {},
address = {University of Illinois at Urbana-Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA},
url = {http://inverseprobability.com/publications/frey-markovian98.html},
abstract = {Bayesian belief networks can represent the complicated probabilistic processes that form natural sensory inputs. Once the parameters of the network have been learned, nonlinear inferences about the input can be made by computing the posterior distribution over the hidden units (e.g., depth in stereo vision) given the input. Computing the posterior distribution exactly is not practical in richly-connected networks, but it turns out that by using a variational (a.k.a., mean field) method, it is easy to find a product-form distribution that approximates the true posterior distribution. This approximation assumes that the hidden variables are independent given the current input. In this paper, we explore a more powerful variational technique that models the posterior distribution using a Markov chain. We compare this method with inference using mean fields and mixtures of mean fields in randomly generated networks.}
}
```

Endnote

```
%0 Conference Paper
%T Markovian inference in belief networks
%A Brendan J. Frey
%A Neil D. Lawrence
%A Christopher M. Bishop
%B
%C Proceedings of Machine Learning Research
%D 1998
%E
%F pmlr-v-frey-markovian98
%I PMLR
%J Proceedings of Machine Learning Research
%P --
%U http://inverseprobability.com
%V
%W PMLR
%X Bayesian belief networks can represent the complicated probabilistic processes that form natural sensory inputs. Once the parameters of the network have been learned, nonlinear inferences about the input can be made by computing the posterior distribution over the hidden units (e.g., depth in stereo vision) given the input. Computing the posterior distribution exactly is not practical in richly-connected networks, but it turns out that by using a variational (a.k.a., mean field) method, it is easy to find a product-form distribution that approximates the true posterior distribution. This approximation assumes that the hidden variables are independent given the current input. In this paper, we explore a more powerful variational technique that models the posterior distribution using a Markov chain. We compare this method with inference using mean fields and mixtures of mean fields in randomly generated networks.
```

RIS

```
TY - CPAPER
TI - Markovian inference in belief networks
AU - Brendan J. Frey
AU - Neil D. Lawrence
AU - Christopher M. Bishop
BT -
PY - 1998
DA -
ED -
ID - pmlr-v-frey-markovian98
PB - PMLR
SP -
DP - PMLR
EP -
L1 -
UR - http://inverseprobability.com/publications/frey-markovian98.html
AB - Bayesian belief networks can represent the complicated probabilistic processes that form natural sensory inputs. Once the parameters of the network have been learned, nonlinear inferences about the input can be made by computing the posterior distribution over the hidden units (e.g., depth in stereo vision) given the input. Computing the posterior distribution exactly is not practical in richly-connected networks, but it turns out that by using a variational (a.k.a., mean field) method, it is easy to find a product-form distribution that approximates the true posterior distribution. This approximation assumes that the hidden variables are independent given the current input. In this paper, we explore a more powerful variational technique that models the posterior distribution using a Markov chain. We compare this method with inference using mean fields and mixtures of mean fields in randomly generated networks.
ER -
```

APA

`Frey, B.J., Lawrence, N.D. & Bishop, C.M. (1998). Markovian inference in belief networks. `*in PMLR*
