Identity statement area
Reference Type: Conference Paper (Conference Proceedings)
Last Update: 2017: administrator
Metadata Last Update: 2020: administrator
Citation Key: CaetanoMeloSantSchw:2017:AcReBa
Title: Activity Recognition based on a Magnitude-Orientation Stream Network
Date: Oct. 17-20, 2017
Access Date: 2021, Jan. 21
Number of Files: 1
Size: 1018 KiB
Context area
Author: 1 Caetano, Carlos
2 Melo, Victor H. C. de
3 Santos, Jefersson A. dos
4 Schwartz, William Robson
Affiliation: 1 Universidade Federal de Minas Gerais
2 Universidade Federal de Minas Gerais
3 Universidade Federal de Minas Gerais
4 Universidade Federal de Minas Gerais
Editor: Torchelsen, Rafael Piccin
Nascimento, Erickson Rangel do
Panozzo, Daniele
Liu, Zicheng
Farias, Mylène
Vieira, Thales
Sacht, Leonardo
Ferreira, Nivan
Comba, João Luiz Dihl
Hirata, Nina
Schiavon Porto, Marcelo
Vital, Creto
Pagot, Christian Azambuja
Petronetto, Fabiano
Clua, Esteban
Cardeal, Flávio
Conference Name: Conference on Graphics, Patterns and Images, 30 (SIBGRAPI)
Conference Location: Niterói, RJ
Book Title: Proceedings
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Tertiary Type: Full Paper
History: 2017-08-17 15:31:46 :: -> administrator ::
2020-02-19 02:01:22 :: administrator -> :: 2017
Content and structure area
Is the master or a copy?: is the master
Content Stage: completed
Content Type: External Contribution
Keywords: Magnitude, Orientation, Stream Network, Convolutional Neural Networks
Abstract: The temporal component of videos provides an important cue for activity recognition, as many activities can be reliably recognized from motion information alone. In view of that, this work proposes a novel temporal stream for two-stream convolutional networks, based on images computed from the optical flow magnitude and orientation, named Magnitude-Orientation Stream (MOS), to learn motion in a richer manner. Our method applies simple nonlinear transformations to the vertical and horizontal components of the optical flow to generate input images for the temporal stream. Experimental results, carried out on two well-known datasets (HMDB51 and UCF101), demonstrate that using our proposed temporal stream as input to existing neural network architectures can improve their performance for activity recognition. The results show that our temporal stream provides complementary information that improves the classical two-stream methods, indicating the suitability of our approach as a temporal video representation.
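The abstract describes computing input images from the optical flow's magnitude and orientation. A minimal sketch of that idea follows; this is an illustrative reconstruction under stated assumptions, not the paper's exact transformation (the function name and the min-max normalization to 8-bit images are assumptions):

```python
import numpy as np

def magnitude_orientation_images(flow_x, flow_y):
    """Turn horizontal/vertical optical-flow components into two 8-bit images.

    Assumed normalization: min-max scaling to [0, 255]; the paper's actual
    nonlinear transformations may differ.
    """
    # Flow magnitude per pixel.
    mag = np.sqrt(flow_x ** 2 + flow_y ** 2)
    # Flow orientation per pixel, mapped to [0, 2*pi).
    ori = np.arctan2(flow_y, flow_x) % (2.0 * np.pi)

    def to_uint8(a):
        # Min-max normalize to the 0..255 range of an image channel.
        a = a - a.min()
        rng = a.max()
        if rng == 0:
            return np.zeros_like(a, dtype=np.uint8)
        return np.uint8(255.0 * a / rng)

    return to_uint8(mag), to_uint8(ori)
```

In the two-stream setting, images like these would be stacked over several consecutive frames and fed to the temporal-stream network in place of (or alongside) raw flow fields.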
source Directory Content: there are no files
agreement Directory Content:
agreement.html 17/08/2017 12:31 1.2 KiB
Conditions of access and use area
data URL:
zipped data URL:
Target File: main Certified by IEEE PDF eXpress.pdf
Update Permission: not transferred
Allied materials area
Next Higher Units: 8JMKD3MGPAW/3PJT9LS
Notes area
Empty Fields: accessionnumber, archivingpolicy, archivist, area, callnumber, copyholder, copyright, creatorhistory, descriptionlevel, dissemination, doi, edition, electronicmailaddress, group, holdercode, isbn, issn, label, lineage, mark, nextedition, notes, numberofvolumes, orcid, organization, pages, parameterlist, parentrepositories, previousedition, previouslowerunit, progress, project, readergroup, readpermission, resumeid, rightsholder, secondarydate, secondarykey, secondarymark, secondarytype, serieseditor, session, shorttitle, sponsor, subject, tertiarymark, type, url, versiontype, volume