
1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPAW/3PFRBBL
Repository: sid.inpe.br/sibgrapi/2017/08.21.21.30
Last Update: 2017:08.21.21.30.41 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2017/08.21.21.30.41
Metadata Last Update: 2022:06.14.00.08.57 (UTC) administrator
DOI: 10.1109/SIBGRAPI.2017.25
Citation Key: AriasRamí:2017:SeAp
Title: Learning to Cluster with Auxiliary Tasks: A Semi-Supervised Approach
Format: On-line
Year: 2017
Access Date: 2024, Oct. 15
Number of Files: 1
Size: 657 KiB
2. Context
Author: 1 Arias, Jhosimar George
2 Ramírez, Gerberth Adín
Editor: Torchelsen, Rafael Piccin
Nascimento, Erickson Rangel do
Panozzo, Daniele
Liu, Zicheng
Farias, Mylène
Viera, Thales
Sacht, Leonardo
Ferreira, Nivan
Comba, João Luiz Dihl
Hirata, Nina
Schiavon Porto, Marcelo
Vital, Creto
Pagot, Christian Azambuja
Petronetto, Fabiano
Clua, Esteban
Cardeal, Flávio
e-Mail Address: jhosimar.figueroa@students.ic.unicamp.br
Conference Name: Conference on Graphics, Patterns and Images, 30 (SIBGRAPI)
Conference Location: Niterói, RJ, Brazil
Date: 17-20 Oct. 2017
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2017-08-21 21:30:41 :: jhosimar.figueroa@students.ic.unicamp.br -> administrator ::
2022-06-14 00:08:57 :: administrator -> :: 2017
3. Content and structure
Is the master or a copy? is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: deep learning
generative models
clustering
semi-supervised learning
probabilistic models
Abstract: In this paper, we propose a model to learn a feature-category latent representation of the data that is guided by a semi-supervised auxiliary task. The goal of this auxiliary task is to assign labels to unlabeled data and to regularize the feature space. Our model is a modified version of a Categorical Variational Autoencoder, i.e., a probabilistic generative model that approximates a categorical distribution with variational inference. We benefit from the autoencoder architecture to learn powerful representations with Deep Neural Networks in an unsupervised way, and to optimize the model with semi-supervised tasks. We derive a loss function that integrates the probabilistic model with our auxiliary task to guide the learning process. Experimental results show the effectiveness of our method, achieving more than 90% clustering accuracy using only 100 labeled examples. Moreover, we show that the learned features have discriminative properties that can be used for classification.
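For illustration, a minimal sketch of the kind of model the abstract describes: a categorical VAE trained with a Gumbel-Softmax relaxation plus an auxiliary cross-entropy term on the small labeled subset. The layer sizes, temperature, and loss weights below are assumptions for the sketch, not the authors' settings or code.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoricalVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=512, n_classes=10, feat_dim=64):
        super().__init__()
        # Encoder yields a continuous feature and categorical (cluster) logits.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_feat = nn.Linear(hidden, feat_dim)
        self.to_logits = nn.Linear(hidden, n_classes)
        # Decoder reconstructs the input from the feature-category code.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim))

    def forward(self, x, temperature=0.5):
        h = self.encoder(x)
        feat, logits = self.to_feat(h), self.to_logits(h)
        # Differentiable sample from the approximate categorical posterior.
        y = F.gumbel_softmax(logits, tau=temperature, hard=False)
        x_hat = self.decoder(torch.cat([feat, y], dim=1))
        return x_hat, logits

def loss_fn(x, x_hat, logits, labels=None, kl_weight=1.0, aux_weight=10.0):
    # Reconstruction term (inputs assumed to lie in [0, 1]).
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction='sum') / x.size(0)
    # KL between the categorical posterior q(y|x) and a uniform prior.
    q, log_q = F.softmax(logits, dim=1), F.log_softmax(logits, dim=1)
    kl = (q * (log_q + torch.log(torch.tensor(float(logits.size(1)))))).sum(1).mean()
    loss = recon + kl_weight * kl
    if labels is not None:
        # Auxiliary semi-supervised task on the few labeled examples.
        loss = loss + aux_weight * F.cross_entropy(logits, labels)
    return loss

# Example step: an unlabeled batch plus a small labeled batch (dummy data).
model = CategoricalVAE()
x_u, x_l = torch.rand(32, 784), torch.rand(8, 784)
y_l = torch.randint(0, 10, (8,))
loss = loss_fn(x_u, *model(x_u)) + loss_fn(x_l, *model(x_l), labels=y_l)
loss.backward()
```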
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2017 > Learning to Cluster...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Learning to Cluster...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content:
agreement.html 21/08/2017 18:30 1.2 KiB 
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPAW/3PFRBBL
zipped data URL: http://urlib.net/zip/8JMKD3MGPAW/3PFRBBL
Language: en
Target File: 138.pdf
User Group: jhosimar.figueroa@students.ic.unicamp.br
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPAW/3PKCC58
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2017/09.12.13.04 37
sid.inpe.br/sibgrapi/2022/06.10.21.49 4
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: affiliation archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume

