<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/43BFBQE</identifier>
		<repository>sid.inpe.br/sibgrapi/2020/09.30.13.05</repository>
		<lastupdate>2020:09.30.13.05.44 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2020/09.30.13.05.44</metadatarepository>
		<metadatalastupdate>2022:06.10.19.41.23 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2020}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI51738.2020.00009</doi>
		<citationkey>FariaCarn:2020:WhArGe</citationkey>
		<title>Why are Generative Adversarial Networks so Fascinating and Annoying?</title>
		<format>On-line</format>
		<year>2020</year>
		<numberoffiles>1</numberoffiles>
		<size>8634 KiB</size>
		<author>Faria, Fabio Augusto,</author>
		<author>Carneiro, Gustavo,</author>
		<affiliation>Universidade Federal de São Paulo</affiliation>
		<affiliation>The University of Adelaide</affiliation>
		<editor>Musse, Soraia Raupp,</editor>
		<editor>Cesar Junior, Roberto Marcondes,</editor>
		<editor>Pelechano, Nuria,</editor>
		<editor>Wang, Zhangyang (Atlas),</editor>
		<e-mailaddress>ffaria@unifesp.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)</conferencename>
		<conferencelocation>Porto de Galinhas (virtual)</conferencelocation>
		<date>7-10 Nov. 2020</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Tutorial</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>GAN, machine learning, computer vision, deep learning</keywords>
		<abstract>This paper focuses on one of the most fascinating and successful, yet challenging, generative models in the literature: the Generative Adversarial Network (GAN). Recently, GANs have attracted much attention from the scientific community and the entertainment industry due to their effectiveness in generating complex and high-dimensional data, which makes them superior to other types of generative models for producing new samples. The traditional GAN (referred to as the Vanilla GAN) is composed of two neural networks, a generator and a discriminator, which are trained through a minimax optimization. The generator creates samples to fool the discriminator, which in turn tries to distinguish between original and generated samples. This optimization aims to train a model that can generate samples from the training set distribution. In addition to defining and explaining the Vanilla GAN and its main variations (e.g., DCGAN, WGAN, and SAGAN), this paper presents several applications that make GANs an extremely exciting method for the entertainment industry (e.g., style transfer and image-to-image translation). Finally, the following measures to assess the quality of generated images are presented: the Inception Score (IS) and the Fréchet Inception Distance (FID).</abstract>
		<language>en</language>
		<targetfile>GAN_Tutorial_SIBGRAPI2020.pdf</targetfile>
		<usergroup>ffaria@unifesp.br</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/43G4L9S</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2020/10.28.20.46 48</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<username>ffaria@unifesp.br</username>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2020/09.30.13.05</url>
	</metadata>
</metadatalist>