
Mini Review

Improving triple contrastive learning representation boosting for supervised multiclass classification

Abstract

In recent years, contrastive learning has gained significant attention as a powerful method for training machine learning models, particularly in the domain of unsupervised learning. The basic premise of contrastive learning is to learn representations of data by contrasting similar and dissimilar pairs, thus enhancing the model’s ability to differentiate between various instances. However, its application in supervised multiclass tasks is relatively underexplored, especially when trying to boost the model’s performance by leveraging additional contrastive signals. In this article, we explore the concept of Triple Contrastive Learning Representation Boosting (TCLRB), an advanced approach designed to enhance supervised multiclass classification tasks by leveraging three contrasting components. By combining the strengths of contrastive learning and supervised learning, TCLRB offers a novel framework for improving model accuracy, generalization, and representation learning.
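The article does not spell out the exact TCLRB objective, but the keywords name three contrastive signals: a triplet loss, an augmentation-level contrast, and a class-level (supervised) contrast. As a minimal illustrative sketch — not the authors' formulation — the three terms can be combined as a weighted sum over a batch of embeddings; the weights `w`, the temperature `tau`, and the roll-based negative sampling below are all assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet term: pull positives closer than negatives."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

def nt_xent(z1, z2, tau=0.5):
    """Augmentation-level contrast (NT-Xent) between two views of a batch."""
    n = len(z1)
    z = np.concatenate([z1, z2], axis=0)            # 2N x d
    sim = cosine_sim(z, z) / tau
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    # The positive for sample i is the other augmented view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * n), pos] - log_denom).mean()

def supcon(z, labels, tau=0.5):
    """Class-level supervised contrast: all same-class samples are positives."""
    n = len(z)
    sim = cosine_sim(z, z) / tau
    np.fill_diagonal(sim, -np.inf)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    counts = mask.sum(axis=1)
    per_anchor = -np.where(mask, log_prob, 0.0).sum(axis=1) / np.maximum(counts, 1)
    return per_anchor[counts > 0].mean()

def tclrb_loss(z1, z2, labels, w=(1.0, 1.0, 1.0), margin=1.0, tau=0.5):
    """Hypothetical combined objective: weighted sum of the three terms.
    Negatives for the triplet term are drawn by rolling the batch, which
    may occasionally share the anchor's class (acceptable for a sketch)."""
    neg = np.roll(z2, 1, axis=0)
    l_trip = triplet_loss(z1, z2, neg, margin)
    l_aug = nt_xent(z1, z2, tau)
    l_cls = supcon(np.concatenate([z1, z2]),
                   np.concatenate([labels, labels]), tau)
    return w[0] * l_trip + w[1] * l_aug + w[2] * l_cls
```

In practice each term would be computed on projection-head outputs of a neural encoder and minimized jointly with a standard cross-entropy classification loss; the sketch above only shows how the three contrastive signals compose.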

Keywords

Triple contrastive learning; Triplet loss; Augmentation-level contrast; Model accuracy; Class-level contrast; Multimodal applications

Corresponding Author

Bishnukrupa Barik

Department of Computer Science, Institute of Technical Education and Research (ITER) under Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India

bishnukrupabarik@gmail.com

Article History

Received Date: 17 January 2025

Revised Date: 21 February 2025

Accepted Date: 07 March 2025

