Identity statement area
Reference Type: Conference Proceedings
Last Update: 2018: administrator
Metadata Last Update: 2020: administrator
Citation Key: GaioJúniorSant:2018:MeOpCl
Title: A method for opinion classification in video combining facial expressions and gestures
Date: Oct. 29 - Nov. 1, 2018
Access Date: 2020, Dec. 02
Number of Files: 1
Size: 742 KiB
Context area
Author: 1 Gaio Júnior, Airton
2 Santos, Eulanda Miranda dos
Affiliation: 1 Federal University of Amazonas - UFAM
2 Federal University of Amazonas - UFAM
Editor: Ross, Arun
Gastal, Eduardo S. L.
Jorge, Joaquim A.
Queiroz, Ricardo L. de
Minetto, Rodrigo
Sarkar, Sudeep
Papa, João Paulo
Oliveira, Manuel M.
Arbeláez, Pablo
Mery, Domingo
Oliveira, Maria Cristina Ferreira de
Spina, Thiago Vallin
Mendes, Caroline Mazetto
Costa, Henrique Sérgio Gutierrez
Mejail, Marta Estela
Geus, Klaus de
Scheer, Sergio
Conference Name: Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
Conference Location: Foz do Iguaçu, PR, Brazil
Book Title: Proceedings
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
History: 2018-09-04 01:28:04 :: -> administrator ::
2020-02-19 03:10:44 :: administrator -> :: 2018
Content and structure area
Is the master or a copy?: is the master
Document Stage: completed
not transferred
Content Type: External Contribution
Tertiary Type: Full Paper
Keywords: opinion classification, video, facial expression, gesture body expression, FV, VLAD, encoder
Abstract: Most research dealing with video-based opinion recognition combines data from three different sources: video, audio, and text. As a consequence, such solutions rely on complex, language-dependent models. Beyond this complexity, these solutions often attain low performance in practical applications. To overcome these drawbacks, this work presents a method for opinion classification that uses only video as its data source: facial expression and body gesture information are extracted from online videos and combined to achieve higher classification rates. The proposed method uses feature encoding strategies to improve data representation and to facilitate the classification task, predicting users' opinions with high accuracy and independently of the language used in the videos. Experiments were carried out using three public databases and three baselines. The results show that, even performing only visual analysis of the videos, the proposed method achieves accuracy and precision rates 16% higher than baselines that analyze visual, audio, and textual video data. Moreover, it is shown that the proposed method can identify emotions in videos whose language differs from the language used for training.
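The keywords and abstract mention FV (Fisher Vector) and VLAD feature encoding to improve the representation of visual descriptors. As a point of reference only, a minimal VLAD sketch in NumPy, assuming pre-extracted local descriptors and a pre-learned codebook (the function and variable names are illustrative; this is not the paper's implementation):

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """VLAD encoding: accumulate residuals of local descriptors against
    their nearest codebook centroids, then normalize the flattened vector."""
    k, d = codebook.shape
    # Assign every descriptor to its nearest centroid (Euclidean distance).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assignments == i]
        if len(members) > 0:
            # Sum of residuals (descriptor minus centroid) for this visual word.
            vlad[i] = (members - codebook[i]).sum(axis=0)
    vlad = vlad.ravel()
    # Signed square-root (power) normalization, then global L2 normalization.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

# Example: 100 random 16-D descriptors encoded against an 8-word codebook.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
C = rng.normal(size=(8, 16))
v = vlad_encode(X, C)
print(v.shape)  # (128,) -- k * d
```

The resulting fixed-length vector can then be fed to a standard classifier (e.g. an SVM), which is the usual role of such encodings in video-classification pipelines.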
source Directory Content: there are no files
agreement Directory Content:
agreement.html 03/09/2018 22:28 1.2 KiB
Conditions of access and use area
Target File: method-opinion-classification_id_94.pdf
Allied materials area
Next Higher Units: 8JMKD3MGPAW/3RPADUS
Notes area
Empty Fields: accessionnumber archivingpolicy archivist area callnumber copyholder copyright creatorhistory descriptionlevel dissemination doi edition electronicmailaddress group holdercode isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume