Identity statement area
Reference Type: Conference Paper (Conference Proceedings)
Last Update: 2017:
Metadata Last Update: 2020: administrator
Citation Key: RamosCampNasc:2017:SeHyEg
Title: Semantic Hyperlapse for Egocentric Videos
Access Date: 2021, Jan. 25
Number of Files: 1
Size: 612 KiB
Context area
Author: 1 Ramos, Washington Luis de Souza
2 Campos, Mario Fernando Montenegro
3 Nascimento, Erickson Rangel do
Affiliation: 1 Universidade Federal de Minas Gerais (UFMG)
2 Universidade Federal de Minas Gerais (UFMG)
3 Universidade Federal de Minas Gerais (UFMG)
Editor: Torchelsen, Rafael Piccin
Nascimento, Erickson Rangel do
Panozzo, Daniele
Liu, Zicheng
Farias, Mylène
Viera, Thales
Sacht, Leonardo
Ferreira, Nivan
Comba, João Luiz Dihl
Hirata, Nina
Schiavon Porto, Marcelo
Vital, Creto
Pagot, Christian Azambuja
Petronetto, Fabiano
Clua, Esteban
Cardeal, Flávio
Conference Name: Conference on Graphics, Patterns and Images, 30 (SIBGRAPI)
Conference Location: Niterói, RJ
Date: Oct. 17-20, 2017
Book Title: Proceedings
Publisher: Sociedade Brasileira de Computação
Publisher City: Porto Alegre
Tertiary Type: Master's or Doctoral Work
History: 2017-09-07 17:08:56 :: -> administrator ::
2020-02-20 22:06:47 :: administrator -> :: 2017
Content and structure area
Is the master or a copy?: is the master
Content Stage: completed
Keywords: Hyperlapse, Fast-forward, Semantic information, First-person video.
Abstract: The emergence of low-cost personal mobile devices and wearable cameras, and the increasing storage capacity of video-sharing websites, have pushed forward a growing interest in first-person videos. Wearable cameras can operate for hours without the need for continuous handling. These videos are generally long-running streams with unedited content, which makes them tedious and visually unpalatable, since natural body movements cause the videos to be jerky and even nauseating. Hyperlapse algorithms aim to create a shorter, watchable version with no abrupt transitions between frames. However, an important aspect of such videos is the relevance of the frames, which is usually ignored in hyperlapse videos. In this work, we propose a novel methodology capable of summarizing and stabilizing egocentric videos by extracting and analyzing the semantic information in the frames. This work also describes a dataset collection with several labeled videos and introduces a new smoothness evaluation metric for egocentric videos. Several experiments are conducted to show the superiority of our approach over state-of-the-art hyperlapse algorithms as far as semantic information is concerned. According to the results, our method is on average 10.67 percentage points higher than the second best in relation to the maximum amount of semantics that can be obtained, given the required speed-up. More information can be found in our supplementary video:
source Directory Content: there are no files
agreement Directory Content:
agreement.html 07/09/2017 14:08 1.2 KiB
Conditions of access and use area
data URL
zipped data URL
Target File: 2017_wtd_sibgrapi.pdf
Update Permission: not transferred
Allied materials area
Next Higher Units: 8JMKD3MGPAW/3PJT9LS
Notes area
Empty Fields: accessionnumber archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination doi edition electronicmailaddress group holdercode isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume