Approximate computation has emerged as a promising alternative to accurate computation, particularly for applications that can tolerate some degree of error without significant degradation of output quality. This work analyzes the application of approximate computing to machine learning, focusing on k-means clustering, one of the most widely used unsupervised machine learning algorithms. The k-means algorithm partitions data into k clusters, where k also denotes the number of centroids, with each centroid representing the center of a cluster. The clustering process assigns each data point to the nearest centroid by minimizing the within-cluster sum of squares (WCSS), a key metric used to evaluate clustering quality; a lower WCSS value signifies better clustering. Conventionally, WCSS is computed with high precision using an accurate adder. In this paper, we investigate the impact of employing various approximate adders for WCSS computation and compare their results against those obtained with an accurate adder. Further, we propose a new approximate adder (NAA). To assess its effectiveness, we use it for k-means clustering of publicly available artificial datasets with varying levels of complexity, and compare its performance with that of the accurate adder and many other approximate adders. The experimental results confirm the efficacy of NAA in clustering, as NAA yields WCSS values that closely match or are identical to those obtained using the accurate adder. We also implemented hardware designs of the accurate and approximate adders using a 28 nm CMOS standard cell library. The estimated design metrics show that NAA achieves a 37% reduction in delay, a 22% reduction in area, and a 31% reduction in power compared to the accurate adder. In terms of the power-delay product, which serves as a representative metric for energy efficiency, NAA reports a 57% reduction compared to the accurate adder.
In terms of the area-delay product that serves as a representative metric for design efficiency, NAA reports a 51% reduction compared to the accurate adder. NAA also outperforms several existing approximate adders in terms of design metrics while preserving clustering effectiveness.
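The pipeline the abstract describes can be sketched as follows: accumulate WCSS over point-to-centroid squared distances, with the accumulation adder swappable between an exact adder and an approximate one. The paper's NAA design is not specified in this abstract, so the sketch below stands in a generic lower-bit-truncation approximate adder (a common approximation style); `approx_add`, `wcss`, and the toy dataset are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the NAA's internal logic is not given in the
# abstract, so we model a generic truncation-style approximate adder to
# show how the choice of adder feeds into the WCSS computation.

def approx_add(a, b, trunc_bits=2):
    """Approximate integer adder: zero the low `trunc_bits` bits of both
    operands before adding, mimicking a design that skips carry
    propagation in the lower part of the adder."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) + (b & mask)

def wcss(points, centroids, assign, adder):
    """Within-cluster sum of squares, accumulated with the given adder."""
    total = 0
    for (px, py), c_idx in zip(points, assign):
        cx, cy = centroids[c_idx]
        sq = (px - cx) ** 2 + (py - cy) ** 2
        total = adder(total, sq)
    return total

# Toy 2-D dataset: two clusters of two points each (k = 2).
points = [(0, 0), (6, 0), (20, 20), (26, 20)]
centroids = [(3, 0), (23, 20)]
assign = [0, 0, 1, 1]

exact = wcss(points, centroids, assign, lambda a, b: a + b)    # 36
approx = wcss(points, centroids, assign, approx_add)           # 32
```

The gap between the two totals (36 vs. 32 here) is the clustering-quality cost of the approximation; the abstract's claim is that NAA keeps this gap near zero while still cutting delay, area, and power.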
Padmanabhan Balasubramanian, Syed Mohammed Mosayeeb Al Hady Zaheen, Douglas L. Maskell (2025). Machine Learning Using Approximate Computing. Journal of Low Power Electronics and Applications, 15(2), Article 21. DOI: 10.3390/jlpea15020021.
Type: Article
Year: 2025
Authors: 3
Datasets: 0
Total Files: 0
Language: English
Journal: Journal of Low Power Electronics and Applications
DOI: 10.3390/jlpea15020021