Posterior probability convergence of k-NN classification and K-means clustering

Heysem Kaya*, Olcay Kurşun, Fikret Gürgen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Centroid-based clustering methods, such as K-Means, form Voronoi cells whose radii are inversely proportional to the number of clusters, K; by the Law of Large Numbers, the expected posterior probability within the closest cluster is related to the posterior estimate of a k-Nearest Neighbor (k-NN) classifier. The aim of this study is to examine the relationship between these two seemingly different concepts, clustering and classification, and more specifically between the k of k-NN and the K of K-Means. One specific application area of this correspondence is local learning. The study provides experimental evidence of convergence and a complexity analysis to address the relative advantages of the two methods in local learning applications.
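
The correspondence described in the abstract can be checked numerically. The sketch below (not from the paper) estimates P(class = 1 | x) on synthetic data in two ways: from the label fraction among the k nearest training neighbors, and from the label fraction inside the K-Means cluster containing x, pairing k ≈ n/K so that each Voronoi cell holds roughly k training points. The dataset, the k/K pairing rule, and the Euclidean metric are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Synthetic two-class data; the train/test split is arbitrary.
X, y = make_classification(n_samples=4000, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=1.0, random_state=0)
X_tr, y_tr, X_te = X[:3000], y[:3000], X[3000:]

for K in (10, 30, 100, 300):
    # Pair k with K so that a Voronoi cell holds roughly k training points.
    k = max(1, len(X_tr) // K)

    # k-NN posterior estimate: fraction of the k nearest neighbors in class 1.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    p_knn = knn.predict_proba(X_te)[:, 1]

    # K-Means posterior estimate: class-1 proportion inside the assigned cluster.
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X_tr)
    cluster_rate = np.array([y_tr[km.labels_ == c].mean() if np.any(km.labels_ == c) else 0.5
                             for c in range(K)])
    p_km = cluster_rate[km.predict(X_te)]

    print(f"K={K:4d}  k={k:4d}  mean |p_kNN - p_KMeans| = {np.abs(p_knn - p_km).mean():.3f}")
```

As K grows and the cell radii shrink, the mean absolute gap between the two posterior estimates is expected to decrease, which is the convergence behavior the study examines empirically.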

Original language: English
Title of host publication: Proc. 27th International Symposium on Computer and Information Sciences, ISCIS 2012
Pages: 171-179
Number of pages: 9
DOIs
Publication status: Published - 22 Nov 2013
Event: 27th International Symposium on Computer and Information Sciences, ISCIS 2012 - Paris, France
Duration: 3 Oct 2012 - 4 Oct 2012

Conference

Conference: 27th International Symposium on Computer and Information Sciences, ISCIS 2012
Country/Territory: France
City: Paris
Period: 3/10/12 - 4/10/12

Keywords

  • Clustering
  • K-Means
  • K-Medoids
  • K-NN classification
  • Local learning

