Metadata

Identity statement area
Reference Type: Conference Proceedings
Site: sibgrapi.sid.inpe.br
Identifier: 8JMKD3MGPAW/3RN5TEE
Repository: sid.inpe.br/sibgrapi/2018/08.27.16.23
Last Update: 2018:08.27.16.57.45 administrator
Metadata: sid.inpe.br/sibgrapi/2018/08.27.16.23.42
Metadata Last Update: 2020:02.19.03.10.44 administrator
Citation Key: CavallariRibePont:2018:DoCrFe
Title: Unsupervised representation learning using convolutional and stacked auto-encoders: a domain and cross-domain feature space analysis
Format: On-line
Year: 2018
Date: Oct. 29 - Nov. 1, 2018
Access Date: 2020, Dec. 04
Number of Files: 1
Size: 1208 KiB
Context area
Author: 1 Cavallari, Gabriel B.
2 Ribeiro, Leonardo S. F.
3 Ponti, Moacir A.
Affiliation: 1 USP
2 USP
3 USP
Editor: Ross, Arun
Gastal, Eduardo S. L.
Jorge, Joaquim A.
Queiroz, Ricardo L. de
Minetto, Rodrigo
Sarkar, Sudeep
Papa, João Paulo
Oliveira, Manuel M.
Arbeláez, Pablo
Mery, Domingo
Oliveira, Maria Cristina Ferreira de
Spina, Thiago Vallin
Mendes, Caroline Mazetto
Costa, Henrique Sérgio Gutierrez
Mejail, Marta Estela
Geus, Klaus de
Scheer, Sergio
e-Mail Address: moacir@icmc.usp.br
Conference Name: Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
Conference Location: Foz do Iguaçu, PR, Brazil
Book Title: Proceedings
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
History: 2018-08-27 16:57:45 :: moacir@icmc.usp.br -> administrator :: 2018
2020-02-19 03:10:44 :: administrator -> :: 2018
Content and structure area
Is the master or a copy?: is the master
Document Stage: completed
Document Stage: not transferred
Transferable: 1
Content Type: External Contribution
Tertiary Type: Full Paper
Keywords: Deep Learning, Representation learning, Feature extraction, Unsupervised feature learning.
Abstract: A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the feature spaces in terms of their discriminative power. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising for different visual domains. Classification results with AE features were as discriminative as pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
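Note: the following is a minimal sketch of the pipeline the abstract describes (train an auto-encoder on images alone, then reuse its encoder as a feature extractor), written in PyTorch. The 32x32 RGB input size, layer widths, feature_dim, and training settings are illustrative assumptions and not the architectures evaluated in the paper.

```python
# Minimal sketch (not the authors' exact architectures or hyper-parameters):
# a small convolutional auto-encoder is trained with a reconstruction loss
# only, and its encoder is then reused as a fixed feature extractor.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        # Encoder: 32x32x3 image -> compact feature vector (no labels needed).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, feature_dim),
        )
        # Decoder mirrors the encoder and reconstructs the input image.
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 32 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # 16 -> 32
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_unsupervised(model, loader, epochs=10, lr=1e-3):
    """Train on images alone, minimising the reconstruction error (MSE)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, _ in loader:          # labels, if present, are ignored
            recon, _ = model(images)
            loss = loss_fn(recon, images)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def extract_features(model, loader):
    """Use the trained encoder as a fixed feature extractor."""
    feats, labels = [], []
    for images, y in loader:
        _, z = model(images)
        feats.append(z)
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)
```

The discriminative power of the learned feature space would then be assessed by fitting a simple classifier on the extracted features and their labels, which is the kind of evaluation the abstract refers to.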
source Directory Content
sibgrapi-2018_Analysis_of_cross_domain_unsupervised_learning.pdf 27/08/2018 13:23 1.2 MiB
agreement Directory Content
agreement.html 27/08/2018 13:23 1.2 KiB 
Conditions of access and use area
Language: en
Target File: sibgrapi-2018_Analysis_of_cross_domain_unsupervised_learning.pdf
User Group: moacir@icmc.usp.br
Visibility: shown
Allied materials area
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPAW/3RPADUS
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
Notes area
Empty Fields: accessionnumber archivingpolicy archivist area callnumber copyholder copyright creatorhistory descriptionlevel dissemination doi edition electronicmailaddress group holdercode isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume