Estimating a Kernel Fisher Discriminant in the Presence of Label Noise

Neil D. Lawrence, Bernhard Schölkopf
Proceedings of the International Conference in Machine Learning 18, Morgan Kaufmann, 2001.

Abstract

Data noise is present in many machine learning problem domains; some of these are well studied, but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with *noisy labels*. The approach associates with each example a probability that its label has been flipped. We utilise an expectation maximization (EM) algorithm to update these probabilities. The E-step uses class-conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We have applied the approach to two real-world data sets; the results show the feasibility of the approach.
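Since the abstract compresses the whole procedure into a few sentences, a sketch of the EM loop may help. The following is a minimal, hypothetical Python/NumPy illustration, not the authors' implementation: it assumes an RBF kernel, uses the regularised least-squares form of the KFD for the M-step, models the class-conditional densities of the discriminant projections as Gaussians for the E-step, and simplifies the noise model to a single global flip rate (the paper associates a flip probability with each example, and class priors are omitted here). The function names (`rbf_kernel`, `fit_discriminant`, `em_noisy_kfd`) are invented for this sketch.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_discriminant(K, q, reg=1e-3):
    """Regularised least-squares form of the KFD: regress the kernel
    expansion onto soft +/-1 targets built from q = P(true label = 1)."""
    targets = 2.0 * q - 1.0
    return np.linalg.solve(K + reg * np.eye(len(K)), targets)

def gaussian_pdf(f, mu, var):
    return np.exp(-0.5 * (f - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def em_noisy_kfd(X, y_obs, gamma=1.0, flip=0.1, n_iter=20):
    """EM for a KFD under label-flip noise (global flip rate; class priors
    omitted for brevity). Returns the discriminant weights, the posterior
    P(true label = 1) for each example, and the estimated flip rate."""
    K = rbf_kernel(X, X, gamma)
    q = y_obs.astype(float)  # initialise P(true label = 1) at the observed labels
    for _ in range(n_iter):
        # M-step (discriminant): refit the KFD on the current soft labels.
        alpha = fit_discriminant(K, q)
        f = K @ alpha  # projections onto the discriminant direction
        # Class-conditional Gaussians of the projections, weighted by q.
        w1, w0 = q, 1.0 - q
        mu1 = (w1 * f).sum() / w1.sum()
        mu0 = (w0 * f).sum() / w0.sum()
        var = ((w1 * (f - mu1) ** 2).sum() + (w0 * (f - mu0) ** 2).sum()) / len(f)
        # E-step: posterior that the true label is 1, combining the
        # class-conditional likelihoods with the label-flip model.
        p1 = gaussian_pdf(f, mu1, var) * np.where(y_obs == 1, 1.0 - flip, flip)
        p0 = gaussian_pdf(f, mu0, var) * np.where(y_obs == 0, 1.0 - flip, flip)
        q = p1 / (p1 + p0)
        # M-step (noise model): update the flip rate from the posteriors.
        flip = np.mean(np.where(y_obs == 1, 1.0 - q, q))
    return alpha, q, flip

if __name__ == "__main__":
    # Toy check: two Gaussian blobs with 10% of the labels flipped.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
    y = np.r_[np.zeros(50, int), np.ones(50, int)]
    noisy = y.copy()
    idx = rng.choice(len(y), 10, replace=False)
    noisy[idx] = 1 - noisy[idx]
    alpha, q, flip = em_noisy_kfd(X, noisy)
    print(f"estimated flip rate: {flip:.2f}")
```

The key design point the sketch tries to show is the alternation described in the abstract: the fitted discriminant supplies the class-conditional probabilities for the E-step, and the resulting posteriors in turn supply the soft labels and flip rate for the next M-step.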

Cite this Paper


BibTeX
@InProceedings{Lawrence:noisy01,
  title     = {Estimating a Kernel {F}isher Discriminant in the Presence of Label Noise},
  author    = {Lawrence, Neil D. and Schölkopf, Bernhard},
  booktitle = {Proceedings of the International Conference in Machine Learning},
  year      = {2001},
  editor    = {Brodley, Carla and Danyluk, Andrea P.},
  volume    = {18},
  address   = {San Francisco, CA},
  publisher = {Morgan Kaufmann},
  pdf       = {https://inverseprobability.com/publications/files/noisyfisher.pdf},
  url       = {http://inverseprobability.com/publications/lawrence-noisy01.html},
  abstract  = {Data noise is present in many machine learning problem domains; some of these are well studied, but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with *noisy labels*. The approach associates with each example a probability that its label has been flipped. We utilise an expectation maximization (EM) algorithm to update these probabilities. The E-step uses class-conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We have applied the approach to two real-world data sets; the results show the feasibility of the approach.}
}
Endnote
%0 Conference Paper
%T Estimating a Kernel Fisher Discriminant in the Presence of Label Noise
%A Neil D. Lawrence
%A Bernhard Schölkopf
%B Proceedings of the International Conference in Machine Learning
%D 2001
%E Carla Brodley
%E Andrea P. Danyluk
%F Lawrence:noisy01
%I Morgan Kaufmann
%U http://inverseprobability.com/publications/lawrence-noisy01.html
%V 18
%X Data noise is present in many machine learning problem domains; some of these are well studied, but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with *noisy labels*. The approach associates with each example a probability that its label has been flipped. We utilise an expectation maximization (EM) algorithm to update these probabilities. The E-step uses class-conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We have applied the approach to two real-world data sets; the results show the feasibility of the approach.
RIS
TY - CPAPER
TI - Estimating a Kernel Fisher Discriminant in the Presence of Label Noise
AU - Neil D. Lawrence
AU - Bernhard Schölkopf
BT - Proceedings of the International Conference in Machine Learning
DA - 2001/01/01
ED - Carla Brodley
ED - Andrea P. Danyluk
ID - Lawrence:noisy01
PB - Morgan Kaufmann
VL - 18
L1 - https://inverseprobability.com/publications/files/noisyfisher.pdf
UR - http://inverseprobability.com/publications/lawrence-noisy01.html
AB - Data noise is present in many machine learning problem domains; some of these are well studied, but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with *noisy labels*. The approach associates with each example a probability that its label has been flipped. We utilise an expectation maximization (EM) algorithm to update these probabilities. The E-step uses class-conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We have applied the approach to two real-world data sets; the results show the feasibility of the approach.
ER -
APA
Lawrence, N.D. & Schölkopf, B. (2001). Estimating a Kernel Fisher Discriminant in the Presence of Label Noise. Proceedings of the International Conference in Machine Learning 18. Available from http://inverseprobability.com/publications/lawrence-noisy01.html.