<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<identifier>8JMKD3MGPEW34M/43BHB8L</identifier>
		<repository>sid.inpe.br/sibgrapi/2020/09.30.23.54</repository>
		<metadatarepository>sid.inpe.br/sibgrapi/2020/09.30.23.54.49</metadatarepository>
		<site>sibgrapi.sid.inpe.br 802</site>
		<citationkey>CordeiroCarn:2020:HoTrYo</citationkey>
		<author>Cordeiro, Filipe Rolim,</author>
		<author>Carneiro, Gustavo,</author>
		<affiliation>Universidade Federal Rural de Pernambuco</affiliation>
		<affiliation>University of Adelaide</affiliation>
		<title>A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?</title>
		<conferencename>Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)</conferencename>
		<year>2020</year>
		<editor>Musse, Soraia Raupp,</editor>
		<editor>Cesar Junior, Roberto Marcondes,</editor>
		<editor>Pelechano, Nuria,</editor>
		<editor>Wang, Zhangyang (Atlas),</editor>
		<booktitle>Proceedings</booktitle>
		<date>Nov. 7-10, 2020</date>
		<publisheraddress>Los Alamitos</publisheraddress>
		<publisher>IEEE Computer Society</publisher>
		<conferencelocation>Virtual</conferencelocation>
		<keywords>noisy labels, deep learning.</keywords>
		<abstract>Noisy labels are commonly present in data sets automatically collected from the internet, mislabeled by non-specialist annotators, or even by specialists in a challenging task, such as in the medical field. Although deep learning models have shown significant improvements in different domains, an open issue is their ability to memorize noisy labels during training, reducing their generalization potential. As deep learning models depend on correctly labeled data sets and label correctness is difficult to guarantee, it is crucial to consider the presence of noisy labels for deep learning training. Several approaches have been proposed in the literature to improve the training of deep learning models in the presence of noisy labels. This paper presents a survey on the main techniques in the literature, in which we classify the algorithms into the following groups: robust losses, sample weighting, sample selection, meta-learning, and combined approaches. We also present the commonly used experimental setup, data sets, and results of the state-of-the-art models.</abstract>
		<language>en</language>
		<tertiarytype>Tutorial</tertiarytype>
		<format>On-line</format>
		<size>494 KiB</size>
		<numberoffiles>1</numberoffiles>
		<targetfile>Tutorial_ID_4_SIBGRAPI_2020_camara_ready_v2 copy.pdf</targetfile>
		<lastupdate>2020:09.30.23.54.49 sid.inpe.br/banon/2001/03.30.15.38 filipe.rolim@ufrpe.br</lastupdate>
		<metadatalastupdate>2020:10.28.20.46.59 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2020}</metadatalastupdate>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<e-mailaddress>filipe.rolim@ufrpe.br</e-mailaddress>
		<username>filipe.rolim@ufrpe.br</username>
		<usergroup>filipe.rolim@ufrpe.br</usergroup>
		<visibility>shown</visibility>
		<transferableflag>1</transferableflag>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<contenttype>External Contribution</contenttype>
		<documentstage>not transferred</documentstage>
		<nexthigherunit>8JMKD3MGPEW34M/43G4L9S</nexthigherunit>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2020/09.30.23.54</url>
	</metadata>
</metadatalist>