Variational Learning for Multi-layer networks of Linear Threshold Units

Neil D. Lawrence
Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, Morgan Kaufmann, pp. 245–252, 2001.

Abstract

Linear threshold units were originally proposed as models of biological neurons. They were widely studied in the context of the perceptron (Rosenblatt, 1962). Due to the difficulties of finding a general algorithm for networks with hidden nodes, they never passed into general use. We derive an algorithm in the context of graphical models and show how it may be applied in multi-layer networks of linear threshold units. We demonstrate the algorithm through three well-known datasets.
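For readers unfamiliar with the model class, the sketch below shows what a linear threshold unit computes and why hidden nodes matter: a single unit can only represent linearly separable functions, whereas a small two-layer network of units can compute XOR. This is an illustrative sketch only (the hand-picked weights are not from the paper, which concerns *learning* such networks variationally).

```python
import numpy as np

def ltu(x, w, b):
    """Linear threshold unit: outputs 1 when the weighted input exceeds the threshold, else 0."""
    return int(np.dot(w, x) + b > 0)

def xor_net(x1, x2):
    """A two-layer LTU network computing XOR, which no single unit can represent.

    Weights are chosen by hand for illustration; the paper's contribution is an
    algorithm for learning the weights of such hidden units.
    """
    h1 = ltu([x1, x2], [1, 1], -0.5)   # hidden unit acting as OR
    h2 = ltu([x1, x2], [1, 1], -1.5)   # hidden unit acting as AND
    return ltu([h1, h2], [1, -1], -0.5)  # output: h1 AND NOT h2
```

The discrete threshold makes the hidden activations non-differentiable, which is why gradient-based methods do not apply directly and a variational treatment is attractive.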

Cite this Paper


BibTeX
@InProceedings{Lawrence:ltu01,
  title = {Variational Learning for Multi-layer networks of Linear Threshold Units},
  author = {Lawrence, Neil D.},
  booktitle = {Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics},
  pages = {245--252},
  year = {2001},
  editor = {Jaakkola, Tommi S. and Richardson, Thomas S.},
  address = {San Francisco, CA},
  publisher = {Morgan Kaufmann},
  url = {http://inverseprobability.com/publications/lawrence-ltu01.html},
  abstract = {Linear threshold units were originally proposed as models of biological neurons. They were widely studied in the context of the perceptron (Rosenblatt, 1962). Due to the difficulties of finding a general algorithm for networks with hidden nodes, they never passed into general use. We derive an algorithm in the context of graphical models and show how it may be applied in multi-layer networks of linear threshold units. We demonstrate the algorithm through three well-known datasets.}
}
Endnote
%0 Conference Paper
%T Variational Learning for Multi-layer networks of Linear Threshold Units
%A Neil D. Lawrence
%B Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics
%D 2001
%E Tommi S. Jaakkola
%E Thomas S. Richardson
%F Lawrence:ltu01
%I Morgan Kaufmann
%P 245--252
%U http://inverseprobability.com/publications/lawrence-ltu01.html
%X Linear threshold units were originally proposed as models of biological neurons. They were widely studied in the context of the perceptron (Rosenblatt, 1962). Due to the difficulties of finding a general algorithm for networks with hidden nodes, they never passed into general use. We derive an algorithm in the context of graphical models and show how it may be applied in multi-layer networks of linear threshold units. We demonstrate the algorithm through three well-known datasets.
RIS
TY  - CPAPER
TI  - Variational Learning for Multi-layer networks of Linear Threshold Units
AU  - Neil D. Lawrence
BT  - Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics
DA  - 2001/01/01
ED  - Tommi S. Jaakkola
ED  - Thomas S. Richardson
ID  - Lawrence:ltu01
PB  - Morgan Kaufmann
SP  - 245
EP  - 252
UR  - http://inverseprobability.com/publications/lawrence-ltu01.html
AB  - Linear threshold units were originally proposed as models of biological neurons. They were widely studied in the context of the perceptron (Rosenblatt, 1962). Due to the difficulties of finding a general algorithm for networks with hidden nodes, they never passed into general use. We derive an algorithm in the context of graphical models and show how it may be applied in multi-layer networks of linear threshold units. We demonstrate the algorithm through three well-known datasets.
ER  -
APA
Lawrence, N. D. (2001). Variational Learning for Multi-layer networks of Linear Threshold Units. Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, 245–252. Available from http://inverseprobability.com/publications/lawrence-ltu01.html.

Related Material