<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<identifier>6qtX3pFwXQZeBBx/GGUru</identifier>
		<repository>sid.inpe.br/banon/2005/07.09.13.12</repository>
		<metadatarepository>sid.inpe.br/banon/2005/07.09.13.12.22</metadatarepository>
		<site>sibgrapi.sid.inpe.br 802</site>
		<citationkey>TorreãoFern:2005:SiShDe</citationkey>
		<author>Torreão, José Ricardo de Almeida,</author>
		<author>Fernandes, João Luiz,</author>
		<affiliation>Instituto de Computação, Universidade Federal Fluminense,</affiliation>
		<title>Single-image shape from defocus</title>
		<conferencename>Brazilian Symposium on Computer Graphics and Image Processing, 18 (SIBGRAPI)</conferencename>
		<year>2005</year>
		<editor>Rodrigues, Maria Andréia Formico,</editor>
		<editor>Frery, Alejandro César,</editor>
		<booktitle>Proceedings</booktitle>
		<date>9-12 Oct. 2005</date>
		<publisheraddress>Los Alamitos</publisheraddress>
		<publisher>IEEE Computer Society</publisher>
		<conferencelocation>Natal</conferencelocation>
		<keywords>shape from defocus, shape from shading</keywords>
		<abstract>A camera's limited depth of field causes scene points at different distances to be imaged with different amounts of defocus. If images captured under different aperture settings are available, a defocus measure can be estimated and used for 3D scene reconstruction. Defocusing is usually modeled as Gaussian convolution over local image patches, but estimating a defocus measure on that basis is hampered by the spurious high frequencies introduced by windowing. Here we show that this can be ameliorated by the use of unnormalized Gaussians, which allow defocus estimation from the zero-frequency Fourier component of the image patches, thus avoiding the spurious high frequencies. As our main contribution, we also show that the modified shape-from-defocus approach can be extended to shape estimation from a single shading input. This is done by simulating an aperture change, via Gaussian convolution, in order to generate the second image required for defocus estimation. As proven here, the Gaussian-blurred image carries an explicit depth-dependent blur component, which is missing from an ideal shading input, and thus allows depth estimation as in the multi-image case.</abstract>
		<language>en</language>
		<tertiarytype>Full Paper</tertiarytype>
		<format>On-line</format>
		<size>139 KiB</size>
		<numberoffiles>1</numberoffiles>
		<targetfile>torreaoj_defocus.pdf</targetfile>
		<lastupdate>2005:07.09.03.00.00 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatalastupdate>2020:02.19.03.19.10 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2005}</metadatalastupdate>
		<e-mailaddress>jrat@ic.uff.br</e-mailaddress>
		<usergroup>jrat administrator</usergroup>
		<visibility>shown</visibility>
		<transferableflag>1</transferableflag>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<contenttype>External Contribution</contenttype>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/banon/2005/07.09.13.12</url>
	</metadata>
</metadatalist>