Metadata

Reference Type: Conference Proceedings
Identifier: 8JMKD3MGPBW34M/3A3LQGS
Repository: sid.inpe.br/sibgrapi/2011/07.11.01.52
Metadata: sid.inpe.br/sibgrapi/2011/07.11.01.52.29
Site: sibgrapi.sid.inpe.br
Citation Key: LopesSanValAlmAra:2011:TrLeHu
Author: 1. Lopes, Ana Paula B.
        2. Santos, Elerson R. da S.
        3. Valle, Eduardo A.
        4. Almeida, Jussara M. de
        5. Araújo, Arnaldo A. de
Affiliation: 1. Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil and Depart. of Exact and Tech. Sciences - Universidade Estadual de Santa Cruz (UESC), Ilhéus, Brazil
             2. Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
             3. Universidade Estadual de Campinas (UNICAMP), Campinas (SP), Brazil
             4. Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
             5. Depart. of Computer Science - Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (MG), Brazil
Title: Transfer Learning for Human Action Recognition
Conference Name: Conference on Graphics, Patterns and Images, 24 (SIBGRAPI)
Year: 2011
Editor: Lewiner, Thomas
        Torres, Ricardo
Book Title: Proceedings
Date: Aug. 28 - 31, 2011
Publisher City: Los Alamitos
Publisher: IEEE Computer Society Conference Publishing Services
Conference Location: Maceió
Keywords: action recognition, transfer learning, bags-of-visual-features, video understanding.
Abstract: Manually collecting action samples from realistic videos is a time-consuming and error-prone task. This is a serious bottleneck for research on video understanding, since the large intra-class variations of such videos demand training sets large enough to properly encompass those variations. Most authors dealing with this issue rely on (semi-)automated procedures to collect additional, generally noisy, examples. In this paper, we exploit a different approach, based on a Transfer Learning (TL) technique, to address the target task of action recognition. More specifically, we propose a framework that transfers knowledge about concepts from a previously labeled still-image database to the target action video database. It is assumed that, once identified in the target action database, these concepts provide contextual clues to the action classifier. Our experiments with the Caltech256 and Hollywood2 databases indicate: a) the feasibility of successfully using transfer learning techniques to detect concepts, and b) that it is indeed possible to enhance action recognition with the transferred knowledge of even a few concepts. In our case, only four concepts were enough to obtain statistically significant improvements for most actions.
Language: en
Tertiary Type: Full Paper
Format: DVD, On-line.
Size: 532 KiB
Number of Files: 1
Target File: PID1979911.pdf
Last Update: 2011:07.19.13.42.54 sid.inpe.br/banon/2001/03.30.15.38 elerss@gmail.com
Metadata Last Update: 2011:07.23.15.36.12 sid.inpe.br/banon/2001/03.30.15.38 elerss@gmail.com {D 2011}
Document Stage: completed
Is the master or a copy?: is the master
Mirror: sid.inpe.br/banon/2001/03.30.15.38.24
e-Mail Address: elerss@gmail.com
User Group: elerss@gmail.com
Visibility: shown
Transferable: 1
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
Content Type: External Contribution
Source Directory Content: there are no files
Agreement Directory Content:
    agreement.html 10/07/2011 22:52 0.5 KiB
History: 2011-07-23 15:36:12 :: elerss@gmail.com -> :: 2011
Empty Fields: accessionnumber archivingpolicy archivist area callnumber copyholder copyright creatorhistory descriptionlevel dissemination documentstage doi edition electronicmailaddress group holdercode isbn issn label lineage mark nextedition nexthigherunit notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume
Access Date: 2019, Dec. 09