DeepFaceEditing: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control

 

Shu-Yu Chen1,2 *          Feng-Lin Liu1,2 *          Yu-Kun Lai3          Paul L. Rosin3          Chunpeng Li1,2          Hongbo Fu4       Lin Gao1,2 †         

1 Institute of Computing Technology, Chinese Academy of Sciences

 

2 University of Chinese Academy of Sciences

 

3 Cardiff University      

 

4 City University of Hong Kong      

 

* Authors contributed equally  

 

† Corresponding author

Accepted by SIGGRAPH 2021

Figure: Our DeepFaceEditing method allows users to intuitively edit a face image to manipulate its geometry and appearance with detailed control. Given a portrait image (a), our method disentangles its geometry and appearance, and the resulting representations can faithfully reconstruct the input image (d). We show a range of flexible face editing tasks that can be achieved with our unified framework: (b) changing the appearance according to the given reference images while retaining the geometry, (c) replacing the geometry of the face with a sketch while keeping the appearance, (e) editing the geometry using sketches, and (f) editing both the geometry and appearance. The inputs used to control the appearance and geometry are shown as small images with green and orange borders, respectively.

Abstract

 

Recent facial image synthesis methods have been mainly based on conditional generative models. Sketch-based conditions can effectively describe the geometry of faces, including the contours of facial components, hair structures, and salient edges (e.g., wrinkles) on face surfaces, but they lack effective control of appearance, which is influenced by color, material, lighting conditions, etc. To gain more control over the generated results, one possible approach is to apply existing disentanglement methods to decompose face images into geometry and appearance representations. However, existing disentanglement methods are not optimized for human face editing and cannot achieve fine control of facial details such as wrinkles. To address this issue, we propose DeepFaceEditing, a structured disentanglement framework specifically designed for face images to support face generation and editing with disentangled control of geometry and appearance. We adopt a local-to-global approach to incorporate face domain knowledge: local component images are decomposed into geometry and appearance representations, which are fused consistently using a global fusion module to improve generation quality. We exploit sketches to assist in extracting a better geometry representation, which also supports intuitive geometry editing via sketching. The resulting method can either extract the geometry and appearance representations from face images, or directly extract the geometry representation from face sketches. Such representations allow users to easily edit and synthesize face images with decoupled control of their geometry and appearance. Both qualitative and quantitative evaluations show the superior detail and appearance control abilities of our method compared to state-of-the-art methods.
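For readers who want a concrete picture of how such disentangled control can be wired together, the sketch below shows one possible arrangement: a geometry encoder fed with a sketch, an appearance encoder fed with a reference image, and a generator that fuses the two feature maps. This is a minimal, illustrative PyTorch sketch under our own assumptions about module names and shapes; it is not the released Jittor implementation, and it omits the per-component decomposition and global fusion module described above.

# Illustrative sketch (not the released code): disentangled
# geometry/appearance control with two encoders and one generator.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Downsampling CNN mapping a 256x256 input to a 32x32 feature map."""
    def __init__(self, in_ch):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (64, 128, 256):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Upsampling CNN that fuses concatenated geometry + appearance features."""
    def __init__(self):
        super().__init__()
        layers, ch = [], 512  # 256 geometry + 256 appearance channels
        for out_ch in (256, 128, 64):
            layers += [nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, geom_feat, app_feat):
        return self.net(torch.cat([geom_feat, app_feat], dim=1))

geom_enc = Encoder(in_ch=1)   # geometry source: a 1-channel sketch
app_enc = Encoder(in_ch=3)    # appearance source: an RGB reference image
gen = Generator()

sketch = torch.randn(1, 1, 256, 256)
reference = torch.randn(1, 3, 256, 256)
fake = gen(geom_enc(sketch), app_enc(reference))
print(fake.shape)  # torch.Size([1, 3, 256, 256])

Feeding the same sketch with different reference images corresponds to the appearance transfer in (b) above, while editing the sketch under a fixed reference corresponds to the geometry editing in (c) and (e).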


 

Paper

DeepFaceEditing: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control

 

Code

 

Jittor

DeepFaceDrawing: Deep Generation of Face Images from Sketches

 

Shu-Yu Chen1 *          Wanchao Su2 *          Lin Gao1 †          Shihong Xia1          Hongbo Fu2      

1Institute of Computing Technology, Chinese Academy of Sciences

 

2City University of Hong Kong      

 

* Authors contributed equally

 

† Corresponding author

Accepted by SIGGRAPH 2020

Figure: Our DeepFaceDrawing system allows users with little training in drawing to produce high-quality face images (Bottom) from rough or even incomplete freehand sketches (Top). Note that our method faithfully respects user intentions in input strokes, which serve more like soft constraints to guide image synthesis.

Abstract

 

Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. We take a local-to-global approach: we first learn feature embeddings of key face components, and push the corresponding parts of input sketches towards underlying component manifolds defined by the feature vectors of face component samples. We also propose another deep neural network to learn the mapping from the embedded component features to realistic images, with multi-channel feature maps as intermediate results to improve the information flow. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches. Our tool is easy to use even for non-artists, while still supporting fine-grained control of shape details. Both qualitative and quantitative evaluations show the superior generation ability of our system compared to existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.
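The "push towards component manifolds" step above can be pictured as replacing each sketch component's feature vector with a locally linear combination of its nearest neighbours among the feature vectors of training samples, in the spirit of locally linear embedding. Below is a minimal NumPy sketch under that assumption; the function name and the least-squares formulation are ours for illustration and are not taken from the released code.

# Illustrative sketch: project a query feature onto the local manifold
# spanned by its K nearest training features (locally-linear-embedding
# style reconstruction with sum-to-one weights).
import numpy as np

def project_to_manifold(query, samples, k=10):
    """Replace `query` (D,) with a weighted combination of its K
    nearest neighbours among `samples` (N, D)."""
    dists = np.linalg.norm(samples - query, axis=1)
    neighbours = samples[np.argsort(dists)[:k]]   # (k, D)

    # Solve for weights w minimising ||query - w @ neighbours||^2
    # subject to sum(w) = 1 (standard LLE closed form).
    diffs = neighbours - query                    # (k, D)
    gram = diffs @ diffs.T                        # (k, k) local Gram matrix
    gram += np.eye(k) * 1e-3 * np.trace(gram)     # regularise for stability
    w = np.linalg.solve(gram, np.ones(k))
    w /= w.sum()
    return w @ neighbours                         # refined feature on manifold

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))   # feature vectors of training components
rough = rng.normal(size=64)           # feature of a rough input component sketch
refined = project_to_manifold(rough, feats)
print(refined.shape)  # (64,)

Because the refined feature lies on (or near) the learned component manifold, even a rough or incomplete stroke decodes to a plausible face component; this is what lets input sketches act as soft constraints rather than hard edge maps.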

 


Paper

 

DeepFaceDrawing: Deep Generation of Face Images from Sketches

Supplemental Materials

 

Code

 

Jittor    PyTorch [coming soon]

 

Video

 

Demo    YouTube    SIGGRAPH Technical Papers Preview Trailer

 

System

 

Online System    Chinese demo system (click to try)    Demo system trained on Asian faces (click to try)

Popular Press

BibTeX

@article{chenDeepFaceEditing2021,
    author = {Chen, Shu-Yu and Liu, Feng-Lin and Lai, Yu-Kun and Rosin, Paul L. and Li, Chunpeng and Fu, Hongbo and Gao, Lin},
    title = {{DeepFaceEditing}: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control},
    journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2021)},
    year = {2021},
    volume = {40},
    number = {4},
    pages = {90:1--90:15}
}

@article{chenDeepFaceDrawing2020,
    author = {Chen, Shu-Yu and Su, Wanchao and Gao, Lin and Xia, Shihong and Fu, Hongbo},
    title = {{DeepFaceDrawing}: Deep Generation of Face Images from Sketches},
    journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2020)},
    year = {2020},
    volume = {39},
    number = {4},
    pages = {72:1--72:16}
}

Last updated in June 2021.