Differentially Private k-Means Clustering with Convergence Guarantee

Zhigang Lu, Hong Shen

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


Iterative clustering around representative points is an effective technique for clustering and helps reveal insights behind data to support various important applications. Unfortunately, it also opens security holes that may allow adversaries with background knowledge to infer private information about individuals. To protect individual privacy against such inference attacks, preserving differential privacy in iterative clustering algorithms has been extensively studied. Existing differentially private clustering algorithms adopt the same framework: they compute differentially private centroids iteratively by running Lloyd's k-means algorithm to obtain the actual centroids, then perturbing them with a differential privacy mechanism. These algorithms suffer from the lack of a convergence guarantee, i.e., they provide no guarantee of termination at a solution of Lloyd's algorithm within a bounded number of iterations. This problem severely impacts their clustering quality and execution efficiency. To address this problem, this article follows the same centroid-updating pattern as existing work in interactive settings; however, we propose a novel framework for injecting differential privacy into the actual centroids. Specifically, to ensure convergence, we maintain the perturbed centroids of the previous iteration t-1 to compute a convergence zone for each cluster in the current iteration t, where we inject differential privacy noise. To achieve a satisfactory convergence rate, we further control the orientation of centroid movement in each cluster using two strategies: one takes the orientation of centroid movement from iteration t-1 to iteration t (past knowledge); the other additionally uses the orientation of movement from iteration t to iteration t+1 (future knowledge). We prove that, in the expected case, our algorithm (under both strategies) converges to a solution of Lloyd's algorithm in at most twice as many iterations as Lloyd's algorithm.
Furthermore, when using both past and future knowledge, we prove that our algorithm converges to the same solution as Lloyd's algorithm (for the same initial centroids) with high probability, at the cost of slower convergence than using only past knowledge, due to the duplicated operations in each iteration required to compute the future knowledge. We perform experimental evaluations on seven widely used real-world datasets. The experimental results show that our algorithm outperforms the state-of-the-art methods for interactive differentially private clustering, providing guaranteed convergence and better clustering quality while meeting the same differential privacy requirements.
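The abstract's core idea — perturbing each cluster's true centroid with differentially private noise, but confining the noisy centroid to a "convergence zone" derived from the previous iteration's centroids so that the iteration still makes progress toward a Lloyd solution — can be sketched as follows. This is a minimal illustration, not the paper's exact mechanism: the function name, the Laplace noise scale `1/epsilon`, and the choice of zone radius (the distance the centroid moved this iteration) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_kmeans_step(X, prev_centroids, epsilon, radius_scale=1.0):
    """One hypothetical iteration of convergence-zone DP k-means.

    Assigns points to their nearest previous centroid (standard Lloyd
    assignment), computes the true cluster means, perturbs each mean
    with Laplace noise, then projects the noisy centroid back into a
    ball (the illustrative "convergence zone") whose radius is tied to
    how far the centroid moved, so the perturbed centroid cannot be
    pushed backwards past the previous iteration's position.
    """
    k = prev_centroids.shape[0]
    # Assignment step: nearest previous centroid for every point.
    dists = np.linalg.norm(X[:, None, :] - prev_centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    new_centroids = prev_centroids.copy()
    for j in range(k):
        pts = X[labels == j]
        if len(pts) == 0:
            continue  # keep the old centroid for an empty cluster
        mean = pts.mean(axis=0)
        # Illustrative Laplace perturbation; a real mechanism would
        # calibrate the scale to the query's sensitivity.
        noise = rng.laplace(0.0, 1.0 / epsilon, size=mean.shape)
        candidate = mean + noise
        # Zone radius: how far the true mean moved from the previous
        # perturbed centroid (an assumption standing in for the paper's
        # convergence-zone construction).
        r = radius_scale * np.linalg.norm(mean - prev_centroids[j])
        offset = candidate - mean
        d = np.linalg.norm(offset)
        if d > r and d > 0:
            # Project back onto the zone boundary around the true mean.
            candidate = mean + offset * (r / d)
        new_centroids[j] = candidate
    return new_centroids, labels
```

By construction, each perturbed centroid lies within `radius_scale` times the centroid's own displacement from its true mean, which is the kind of constraint that lets a convergence argument go through; the paper's actual zone and orientation control (past/future knowledge) are more refined than this sketch.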

Original language: English
Article number: 9286725
Pages (from-to): 1541-1552
Number of pages: 12
Journal: IEEE Transactions on Dependable and Secure Computing
Issue number: 4
Publication status: Published - 1 Jul 2021
Externally published: Yes


  • Differential privacy
  • k-Means clustering
  • machine learning


