Show simple item record

dc.contributor.author: İsmailoğlu, Fırat
dc.date.accessioned: 2024-03-04T13:25:14Z
dc.date.available: 2024-03-04T13:25:14Z
dc.date.issued: 23.01.2023 [tr]
dc.identifier.uri: https://hdl.handle.net/20.500.12418/14624
dc.description.abstract: In image classification, there are no labeled training instances for some classes, which are therefore called unseen classes or test classes. To classify these classes, zero-shot learning (ZSL) was developed, which typically attempts to learn a mapping from the (visual) feature space to the semantic space, in which the classes are represented by a list of semantically meaningful attributes. However, the fact that this mapping is learned without using instances of the test classes degrades the performance of ZSL; this is known as the domain shift problem. In this study, we propose to apply the learning vector quantization (LVQ) algorithm in the semantic space once the mapping is determined. First and foremost, this allows us to refine the prototypes of the test classes with respect to the learned mapping, which reduces the effects of the domain shift problem. Secondly, the LVQ algorithm increases the margin of the 1-NN classifier used in ZSL, resulting in better classification. Moreover, for this work, we considered a range of LVQ algorithms, from early to advanced variants, applied them to a number of state-of-the-art ZSL methods, and thus obtained their LVQ extensions. Experiments on five ZSL benchmark datasets showed that the LVQ-empowered extensions of the ZSL methods are superior to their original counterparts in almost all settings. [tr]
dc.language.iso: eng [tr]
dc.publisher: Tubitak Academic Journals [tr]
dc.relation.isversionof: 10.55730/1300-0632.3980 [tr]
dc.rights: info:eu-repo/semantics/openAccess [tr]
dc.subject: Zero-shot learning [tr]
dc.subject: Vector quantization [tr]
dc.subject: Image classification [tr]
dc.subject: Prototype learning [tr]
dc.subject: Large margin classifiers [tr]
dc.title: LVQ Treatment for Zero-Shot Learning [tr]
dc.type: article [tr]
dc.relation.journal: Turkish Journal of Electrical Engineering and Computer Sciences [tr]
dc.contributor.department: Mühendislik Fakültesi [tr]
dc.contributor.authorID: https://orcid.org/0000-0002-6680-7291 [tr]
dc.identifier.volume: 31 [tr]
dc.identifier.issue: 1 [tr]
dc.identifier.startpage:
dc.relation.publicationcategory: Ulusal Hakemli Dergide Makale - Kurum Öğretim Elemanı (Article in a national refereed journal - institutional faculty member) [tr]
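
As an aside for readers of the abstract above: the following is a minimal, illustrative sketch of the general idea the record describes, namely refining semantic-space class prototypes with an LVQ1-style update after a visual-to-semantic mapping has been learned, and then classifying mapped instances by their nearest prototype (1-NN). It is not the authors' implementation; the function names (lvq1_refine, predict_1nn), the learning-rate and epoch parameters, and the use of plain Euclidean distance are assumptions made here for illustration only.

```python
import numpy as np

def lvq1_refine(prototypes, embeddings, labels, lr=0.01, epochs=10):
    """Refine class prototypes with an LVQ1-style update.

    prototypes: (C, d) array, one semantic prototype per class
    embeddings: (N, d) array, instances already mapped into the semantic space
    labels:     (N,) integer array of class indices in [0, C)
    """
    protos = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            # Find the prototype nearest to the mapped instance.
            j = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
            if j == y:
                protos[j] += lr * (x - protos[j])  # pull the correct prototype closer
            else:
                protos[j] -= lr * (x - protos[j])  # push the wrong prototype away
    return protos

def predict_1nn(prototypes, embeddings):
    """Assign each mapped instance to the class of its nearest prototype (1-NN)."""
    dists = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

In a ZSL pipeline of the kind the abstract outlines, `embeddings` would be the outputs of the learned visual-to-semantic mapping and `prototypes` the attribute vectors of the classes; the refined prototypes would then replace the original ones before the 1-NN step.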

