<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<identifier>8JMKD3MGPBW34M/3EDGFSB</identifier>
		<repository>sid.inpe.br/sibgrapi/2013/07.04.21.42</repository>
		<metadatarepository>sid.inpe.br/sibgrapi/2013/07.04.21.42.45</metadatarepository>
		<site>sibgrapi.sid.inpe.br 802</site>
		<citationkey>RauberFalcSpinReze:2013:InSeIm</citationkey>
		<author>Rauber, Paulo Eduardo,</author>
		<author>Falcão, Alexandre Xavier,</author>
		<author>Spina, Thiago Vallin,</author>
		<author>Rezende, Pedro Jussieu de,</author>
		<affiliation>University of Campinas (UNICAMP)</affiliation>
		<affiliation>University of Campinas (UNICAMP)</affiliation>
		<affiliation>University of Campinas (UNICAMP)</affiliation>
		<affiliation>University of Campinas (UNICAMP)</affiliation>
		<title>Interactive segmentation by image foresting transform on superpixel graphs</title>
		<conferencename>Conference on Graphics, Patterns and Images, 26 (SIBGRAPI)</conferencename>
		<year>2013</year>
		<editor>Boyer, Kim,</editor>
		<editor>Hirata, Nina,</editor>
		<editor>Nedel, Luciana,</editor>
		<editor>Silva, Claudio,</editor>
		<booktitle>Proceedings</booktitle>
		<date>Aug. 5-8, 2013</date>
		<publisheraddress>Los Alamitos</publisheraddress>
		<publisher>IEEE Computer Society</publisher>
		<conferencelocation>Arequipa, Peru</conferencelocation>
		<keywords>graph-based image segmentation, image foresting transform, robot users, interactive segmentation</keywords>
		<abstract>There are many scenarios in which user interaction is essential for effective image segmentation. In this paper, we present a new interactive segmentation method based on the Image Foresting Transform (IFT). The method oversegments the input image, creates a graph based on these segments (superpixels), receives markers (labels) drawn by the user on some superpixels, and organizes a competition to label every pixel in the image. Our method has several interesting properties: it is effective, efficient, capable of segmenting multiple objects in time almost linear in the number of superpixels, readily extensible through previously published techniques, and able to benefit from domain-specific feature extraction. We also present a comparison with another IFT-based technique, which can be seen as the pixel-based counterpart of our method. Another contribution of this paper is the description of automatic (robot) users. Given a ground-truth image, these robots simulate interactive segmentation by trained and untrained users, reducing the costs and biases involved in comparing segmentation techniques.</abstract>
		<language>en</language>
		<tertiarytype>Full Paper</tertiarytype>
		<format>On-line.</format>
		<size>3983 KiB</size>
		<numberoffiles>1</numberoffiles>
		<targetfile>article_checked.pdf</targetfile>
		<lastupdate>2013:07.04.21.42.45 sid.inpe.br/banon/2001/03.30.15.38 pauloeduardorauber@gmail.com</lastupdate>
		<metadatalastupdate>2020:02.19.03.09.22 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2013}</metadatalastupdate>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<e-mailaddress>pauloeduardorauber@gmail.com</e-mailaddress>
		<usergroup>pauloeduardorauber@gmail.com</usergroup>
		<visibility>shown</visibility>
		<transferableflag>1</transferableflag>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<contenttype>External Contribution</contenttype>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2013/07.04.21.42</url>
	</metadata>
</metadatalist>