
FlowFace

dc.contributor.author: Zeng, Hao
dc.contributor.author: Zhang, Wei
dc.contributor.author: Fan, Changjie
dc.contributor.author: Lv, Tangjie
dc.contributor.author: Wang, Suzhen
dc.contributor.author: Zhang, Zhimeng
dc.contributor.author: Ma, Bowen
dc.contributor.author: Li, Lincheng
dc.contributor.author: Ding, Yu
dc.contributor.author: Yu, Xin
dc.date.accessioned: 2025-03-20T10:39:44Z
dc.date.available: 2025-03-20T10:39:44Z
dc.date.issued: 2023-06-27
dc.description.abstract: In this work, we propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace. Unlike most previous methods that focus on transferring the source inner facial features but neglect facial contours, our FlowFace can transfer both of them to a target face, thus leading to more realistic face swapping. Concretely, our FlowFace consists of a face reshaping network and a face swapping network. The face reshaping network addresses the shape outline differences between the source and target faces. It first estimates a semantic flow (i.e., face shape differences) between the source and the target face, and then explicitly warps the target face shape with the estimated semantic flow. After reshaping, the face swapping network generates inner facial features that exhibit the identity of the source face. We employ a pre-trained face masked autoencoder (MAE) to extract facial features from both the source face and the target face. In contrast to previous methods that use identity embedding to preserve identity information, the features extracted by our encoder can better capture facial appearances and identity information. Then, we develop a cross-attention fusion module to adaptively fuse inner facial features from the source face with the target facial attributes, thus leading to better identity preservation. Extensive quantitative and qualitative experiments on in-the-wild faces demonstrate that our FlowFace outperforms the state-of-the-art significantly.
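The abstract describes explicitly warping the target face with an estimated semantic flow. The general idea of dense-flow warping can be sketched as follows; this is a minimal backward-warping illustration with nearest-neighbour sampling, not the paper's implementation, and the flow convention (`flow[y, x] = (dy, dx)` offsets) is an assumption for the example.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp a 2-D `image` (H, W) by a dense flow field (H, W, 2).

    flow[y, x] = (dy, dx) means: the output pixel at (y, x) is sampled
    from the input at (y + dy, x + dx). Nearest-neighbour sampling keeps
    the sketch short; a real warping layer would interpolate bilinearly.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sy = int(round(y + flow[y, x, 0]))
            sx = int(round(x + flow[y, x, 1]))
            sy = min(max(sy, 0), h - 1)  # clamp sample coordinates
            sx = min(max(sx, 0), w - 1)  # to the image bounds
            out[y, x] = image[sy, sx]
    return out

# A flow that samples each output pixel from one pixel to its right,
# i.e. the content appears shifted one pixel to the left.
img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0
warped = warp_with_flow(img, flow)
```

In the paper's setting the flow would be predicted by the face reshaping network from the shape difference between source and target, rather than fixed by hand as here.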
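The cross-attention fusion module described above lets target attribute tokens attend over source identity tokens. A minimal single-head scaled dot-product sketch of that pattern is shown below; it omits the learned query/key/value projections and output layers a real module would have, and the token shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cross_attention_fuse(target_attrs, source_feats):
    """Single-head scaled dot-product cross-attention.

    Target attribute tokens act as queries; source identity tokens
    supply both keys and values, so each target token receives a
    convex (softmax-weighted) mix of source identity features.
    """
    d = target_attrs.shape[-1]
    scores = target_attrs @ source_feats.T / np.sqrt(d)  # (Nt, Ns)
    scores -= scores.max(axis=-1, keepdims=True)         # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # rows sum to 1
    return weights @ source_feats                        # (Nt, d)

rng = np.random.default_rng(0)
target_tokens = rng.normal(size=(5, 8))  # 5 target attribute tokens
source_tokens = rng.normal(size=(7, 8))  # 7 source identity tokens
fused = cross_attention_fuse(target_tokens, source_tokens)
```

Because each output row is a softmax-weighted average of source tokens, identical source tokens yield exactly that token back, which makes the fusion easy to sanity-check.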
dc.description.sponsorship: This work is supported by the 2022 Hangzhou Key Science and Technology Innovation Program (No. 2022AIZD0054), the Key Research and Development Program of Zhejiang Province (No. 2022C01011), the ARC-Discovery grants (DP220100800), and ARC-DECRA (DE230100477).
dc.description.status: true
dc.format.extent: 9
dc.identifier.isbn: 9781577358800
dc.identifier.other: Scopus:85167997457
dc.identifier.uri: https://dspace-test.anu.edu.au/handle/1885/733723648
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85167997457&partnerID=8YFLogxK
dc.language.iso: English
dc.relation.ispartofseries: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
dc.rights: Publisher Copyright: Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
dc.title: FlowFace
dc.type: Conference contribution
local.bibliographicCitation.lastpage: 3375
local.bibliographicCitation.startpage: 3367
local.contributor.affiliation: Zeng, Hao; NetEase Fuxi AI Lab
local.contributor.affiliation: Zhang, Wei; NetEase Fuxi AI Lab
local.contributor.affiliation: Fan, Changjie; NetEase Fuxi AI Lab
local.contributor.affiliation: Lv, Tangjie; NetEase Fuxi AI Lab
local.contributor.affiliation: Wang, Suzhen; NetEase Fuxi AI Lab
local.contributor.affiliation: Zhang, Zhimeng; NetEase Fuxi AI Lab
local.contributor.affiliation: Ma, Bowen; NetEase Fuxi AI Lab
local.contributor.affiliation: Li, Lincheng; NetEase Fuxi AI Lab
local.contributor.affiliation: Ding, Yu; NetEase Fuxi AI Lab
local.contributor.affiliation: Yu, Xin; University of Technology Sydney
local.identifier.doi: 10.1609/aaai.v37i3.25444
local.identifier.pure: 3ed87a76-d940-4343-b22b-34941a0280cc
local.type.status: Published