Identity statement area
Reference Type: Conference Proceedings
Last Update: 2018: administrator
Metadata Last Update: 2020: administrator
Citation Key: DuartePenaAlme:2018:BaAtVi
Title: Bag of attributes for video event retrieval
Date: Oct. 29 - Nov. 1, 2018
Access Date: 2020, Dec. 04
Number of Files: 1
Size: 795 KiB
Context area
Author: 1 Duarte, Leonardo Assuane
2 Penatti, Otávio Augusto Bizetto
3 Almeida, Jurandy
Affiliation: 1 Universidade Federal de São Paulo - UNIFESP
2 SAMSUNG Research Institute
3 Universidade Federal de São Paulo - UNIFESP
Editor: Ross, Arun
Gastal, Eduardo S. L.
Jorge, Joaquim A.
Queiroz, Ricardo L. de
Minetto, Rodrigo
Sarkar, Sudeep
Papa, João Paulo
Oliveira, Manuel M.
Arbeláez, Pablo
Mery, Domingo
Oliveira, Maria Cristina Ferreira de
Spina, Thiago Vallin
Mendes, Caroline Mazetto
Costa, Henrique Sérgio Gutierrez
Mejail, Marta Estela
Geus, Klaus de
Scheer, Sergio
Conference Name: Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
Conference Location: Foz do Iguaçu, PR, Brazil
Book Title: Proceedings
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
History: 2018-08-28 03:44:14 :: -> administrator ::
2018-09-03 20:37:56 :: administrator -> :: 2018
2018-09-03 20:45:30 :: -> administrator :: 2018
2020-02-19 03:10:44 :: administrator -> :: 2018
Content and structure area
Is the master or a copy?: is the master
Document Stage: completed
Document Stage: not transferred
Content Type: External Contribution
Tertiary Type: Full Paper
Keywords: video event retrieval, video representation, visual dictionaries, semantics
Abstract: In this paper, we present the Bag-of-Attributes (BoA) model for video representation aiming at video event retrieval. The BoA model is based on a semantic feature space for representing videos, resulting in high-level video feature vectors. For creating a semantic space, i.e., the attribute space, we can train a classifier using a labeled image dataset, obtaining a classification model that can be understood as a high-level codebook. This model is used to map low-level frame vectors into high-level vectors (e.g., classifier probability scores). Then, we apply pooling operations to the frame vectors to create the final bag of attributes for the video. In the BoA representation, each dimension corresponds to one category (or attribute) of the semantic space. Other interesting properties are: compactness, flexibility regarding the classifier, and ability to encode multiple semantic concepts in a single video representation. Our experiments considered the semantic space created by state-of-the-art convolutional neural networks pre-trained on 1000 object categories of ImageNet. Such deep neural networks were used to classify each video frame, and then different coding strategies were used to encode the probability distribution from the softmax layer into a frame vector. Next, different pooling strategies were used to combine frame vectors into the BoA representation for a video. Results using BoA were comparable or superior to the baselines in the task of video event retrieval using the EVVE dataset, with the advantage of providing a much more compact representation.
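The pooling step the abstract describes (per-frame attribute vectors combined into one video-level vector) can be sketched as follows. This is a minimal illustration in plain Python, not the authors' implementation; the `bag_of_attributes` helper name and the toy 4-attribute vectors are hypothetical (the paper's semantic space uses e.g. 1000 ImageNet classes from a pre-trained CNN's softmax layer):

```python
def bag_of_attributes(frame_probs, pooling="max"):
    """Pool per-frame attribute vectors into a single video-level vector.

    frame_probs: list of per-frame probability vectors (e.g. softmax
    scores from a pre-trained image classifier), one list per frame.
    pooling: "max" or "avg", two of the strategies compared in the paper.
    """
    if not frame_probs:
        raise ValueError("need at least one frame vector")
    # Transpose so each tuple holds one attribute's scores across frames.
    columns = list(zip(*frame_probs))
    if pooling == "max":
        return [max(col) for col in columns]
    if pooling == "avg":
        return [sum(col) / len(frame_probs) for col in columns]
    raise ValueError(f"unknown pooling strategy: {pooling!r}")

# Toy example: 3 frames scored over a 4-attribute semantic space.
frames = [
    [0.10, 0.60, 0.20, 0.10],
    [0.05, 0.70, 0.15, 0.10],
    [0.40, 0.30, 0.20, 0.10],
]
video_vector = bag_of_attributes(frames, pooling="avg")
```

With average pooling, each dimension of `video_vector` is the mean score of one attribute over all frames, so the result stays a probability distribution over the semantic space; max pooling instead records the strongest per-frame evidence for each attribute.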
source Directory Content
59paper.pdf 28/08/2018 00:44 794.7 KiB 
agreement Directory Content
agreement.html 28/08/2018 00:44 1.2 KiB 
Conditions of access and use area
Target File: 59paper.pdf
Allied materials area
Next Higher Units: 8JMKD3MGPAW/3RPADUS
Notes area
Empty Fields: accessionnumber archivingpolicy archivist area callnumber copyholder copyright creatorhistory descriptionlevel dissemination doi edition electronicmailaddress group holdercode isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume