Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning
Journal article in Algorithms, 2022


Abstract

In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. However, few-shot classification is a challenging problem due to the uncertainty caused by using few labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, which is often achieved by using a pretrained feature extractor. As such, if the initial task contains many labeled samples, it is possible to circumvent the limitations of few-shot learning. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between considered classes. In this paper, we propose a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized datasets in the field of few-shot learning, while not requiring such restrictive priors. Our methodology is able to cope with both inductive cases, where prediction is performed on test samples independently from each other, and transductive cases, where a joint (batch) prediction is performed.
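To make the transfer-based setting concrete, the sketch below shows a minimal inductive baseline of the kind the abstract describes: features are extracted by a pretrained backbone, and each query sample is classified independently by its nearest class prototype (the mean of that class's few labeled support features). This is a generic illustration under assumed toy data, not the paper's specific method; the function name `nearest_prototype_classify` and the 4-D feature vectors are hypothetical.

```python
import numpy as np

def nearest_prototype_classify(support_feats, support_labels, query_feats):
    """Inductive few-shot classification: assign each query to the class
    whose support-set mean (prototype) is closest in feature space.
    Features are assumed to come from a pretrained backbone (hypothetical)."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its labeled support features.
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    # Each query is predicted independently (the inductive case).
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with made-up 4-D "backbone" features.
support = np.array([[0.0, 0.0, 0.0, 0.0], [0.1, 0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0, 1.0], [0.9, 1.0, 1.0, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.1, 0.0, 0.0], [1.0, 0.9, 1.0, 1.0]])
print(nearest_prototype_classify(support, labels, queries))  # [0 1]
```

In the transductive case, by contrast, all queries in a batch are labeled jointly, which lets a method exploit the batch's structure; the paper's contribution is to do so without restrictive priors such as assuming class balance.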
Main file: algorithms-15-00147.pdf (1.3 MB)
Origin: publisher files allowed on an open archive

Dates and versions

hal-03675145, version 1 (24-05-2022)

License

Attribution

Cite

Yuqing Hu, Stéphane Pateux, Vincent Gripon. Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning. Algorithms, 2022, 15 (5), pp.147. ⟨10.3390/a15050147⟩. ⟨hal-03675145⟩