Reference Type: Conference Proceedings
Citation Key: MattosMenaCesaVelh:2010:3DLiFa
Author: 1 Mattos, Andréa Britto
2 Mena-Chalco, Jesús Pascual
3 Cesar Junior, Roberto Marcondes
4 Velho, Luiz
Affiliation: 1 Universidade de São Paulo
2 Universidade de São Paulo
3 Universidade de São Paulo
4 Instituto Nacional de Matemática Pura e Aplicada
Title: 3D linear facial animation based on real data
Conference Name: Conference on Graphics, Patterns and Images, 23 (SIBGRAPI)
Editor: Bellon, Olga
Esperança, Claudio
Book Title: Proceedings
Date: Aug. 30 - Sep. 3, 2010
Publisher City: Los Alamitos
Publisher: IEEE Computer Society
Conference Location: Gramado
Keywords: Computer graphics, facial animation, 3D reconstruction.
Abstract: In this paper we introduce a facial animation system that uses real three-dimensional models of people acquired by a 3D scanner. We consider a dataset composed of models displaying different facial expressions, and a linear interpolation technique is used to produce smooth transitions between them. One-to-one correspondences between the meshes of the facial expressions are required to apply the interpolation. Instead of computing a dense correspondence directly, a few points are selected and a triangulation is defined, which is then refined by consecutive subdivisions that compute the matchings of intermediate points. Any model of the dataset can be animated given its texture information for the neutral face and its geometry information for the neutral face and all expressions. This is done by computing matrices that store the displacement of every vertex when changing from the neutral face to each of the other expressions. These matrices also make it possible to animate other models given only the texture and geometry of the neutral face. Furthermore, because the system uses 3D reconstructed models, it is capable of generating a three-dimensional facial animation from a single 2D image of a person. As an extension of the system, we use artificial models containing viseme expressions, which are not part of the expressions of the dataset, and apply their displacements to the real models. This allows the models to be given as input to a speech synthesis application in which the face speaks phrases typed by the user. Finally, we generate an average face and amplify the displacements between a subject of the dataset and the average face, automatically creating a caricature of the subject.
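The interpolation scheme the abstract describes can be sketched as follows. This is not the authors' code: it is a minimal illustration, assuming meshes are stored as numpy arrays of vertex positions with one-to-one correspondence, of how per-vertex displacement matrices (expression minus neutral) yield a linear blend; all function names and shapes are illustrative.

```python
# Hedged sketch of linear facial interpolation via displacement matrices.
# A mesh is a (V, 3) array of vertex positions; all expression meshes are
# assumed to be in one-to-one vertex correspondence with the neutral face.
import numpy as np

def expression_displacements(neutral, expressions):
    """Compute, per expression, the offset of every vertex from the neutral face."""
    return {name: mesh - neutral for name, mesh in expressions.items()}

def blend(neutral, displacements, weights):
    """Linear interpolation: neutral + sum_i w_i * D_i.

    Weight 0 reproduces the neutral face, weight 1 the full expression;
    intermediate weights give a smooth transition between them.
    """
    out = neutral.astype(float).copy()
    for name, w in weights.items():
        out += w * displacements[name]
    return out

# Toy example with a 3-vertex "mesh": every vertex of the expression
# is displaced one unit along the y axis relative to the neutral face.
neutral = np.zeros((3, 3))
smile = np.array([[0.0, 1.0, 0.0]] * 3)
disp = expression_displacements(neutral, {"smile": smile})

half_smile = blend(neutral, disp, {"smile": 0.5})  # halfway through the transition
```

The caricature step mentioned at the end of the abstract corresponds, in this sketch, to using a weight greater than 1 on the displacement between a subject and the average face, exaggerating the subject's deviation from it.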
Tertiary Type: Full Paper
Format: Printed, On-line.
Size: 1351 KiB
Number of Files: 1
Target File: 3D Linear Facial Animation Based on Real Data.pdf
Last Update: 2010: administrator
Metadata Last Update: 2020: administrator {D 2010}
Document Stage: completed
Is the master or a copy?: is the master
Content Type: External Contribution
Source Directory Content: there are no files
Agreement Directory Content: there are no files
History: 2010-10-01 04:19:37 :: -> administrator :: 2010
2020-02-20 21:48:54 :: administrator -> :: 2010
Empty Fields: accessionnumber archivingpolicy archivist area callnumber copyholder copyright creatorhistory descriptionlevel dissemination documentstage doi edition electronicmailaddress group holdercode isbn issn label lineage mark nexthigherunit notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume
Access Date: 2020, Nov. 25