Face-MoGLE Logo

Mixture of Global and Local Experts with Diffusion Transformer for Controllable Face Generation


Xuechao Zou1*, Shun Zhang1*, Xing Fu2, Yue Li3, Kai Li4, Yushe Cao4, Congyan Lang1†, Pin Tao4, Junliang Xing4†

1 Beijing Jiaotong University  |  2 Ant Group  |  3 Qinghai University  |  4 Tsinghua University

* Equal contribution.  † Corresponding authors.


Teaser image of Face-MoGLE framework

Abstract

Controllable face generation poses critical challenges in generative modeling due to the intricate balance required between semantic controllability and photorealism. While existing approaches struggle to disentangle semantic controls from their generation pipelines, we revisit the architectural potential of Diffusion Transformers (DiTs) through the lens of expert specialization. This paper introduces Face-MoGLE, a novel framework featuring: (1) semantic-decoupled latent modeling through mask-conditioned space factorization, enabling precise attribute manipulation; (2) a mixture of global and local experts that captures holistic structure and region-level semantics for fine-grained controllability; and (3) a dynamic gating network producing time-dependent coefficients that evolve with diffusion steps and spatial locations. Face-MoGLE provides a powerful and flexible solution for high-quality, controllable face generation, with strong potential in generative modeling and security applications. Extensive experiments demonstrate its effectiveness in both multimodal and monomodal face generation settings, as well as its robust zero-shot generalization capability.


Method

Face-MoGLE framework overview

The Face-MoGLE framework extracts region-specific features from a semantic mask using a shared-weight VAE encoder, routing them to global and local experts. Their outputs are fused through a dynamic gating network, enabling high-fidelity generation with fine semantic alignment.
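To make the expert-fusion idea concrete, here is a minimal NumPy sketch of a mixture of one global expert and several local (region-level) experts combined by a gate conditioned on the diffusion timestep. All class and variable names are hypothetical illustrations, not the paper's actual implementation, and the experts are reduced to plain linear maps for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoGLESketch:
    """Toy mixture of global and local experts (illustrative only).

    One global expert models holistic structure; `num_local` experts model
    mask-region semantics. A gating network produces per-token, per-expert
    weights that depend on both the token features (spatial location) and
    the diffusion timestep, mimicking the time-dependent gating described
    in the text.
    """

    def __init__(self, dim: int, num_local: int):
        scale = 1.0 / np.sqrt(dim)
        self.W_global = rng.standard_normal((dim, dim)) * scale
        self.W_local = rng.standard_normal((num_local, dim, dim)) * scale
        # Gate input: token features concatenated with the scalar timestep.
        self.W_gate = rng.standard_normal((dim + 1, num_local + 1)) * scale

    def __call__(self, tokens: np.ndarray, t: float) -> np.ndarray:
        # tokens: (N, dim) latent tokens; t: diffusion step in [0, 1].
        t_col = np.full((tokens.shape[0], 1), t)
        gate_in = np.concatenate([tokens, t_col], axis=1)        # (N, dim+1)
        weights = softmax(gate_in @ self.W_gate)                  # (N, K+1)
        outs = [tokens @ self.W_global]                           # global expert
        outs += [tokens @ W for W in self.W_local]                # local experts
        stacked = np.stack(outs, axis=-1)                         # (N, dim, K+1)
        # Weighted sum over experts, per token.
        return (stacked * weights[:, None, :]).sum(axis=-1)      # (N, dim)
```

Because the gate takes the timestep as input, the same token can be routed mostly to the global expert early in denoising and shift toward region-level experts as fine details emerge; the real framework operates on VAE-encoded mask regions inside a DiT rather than on raw linear features.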

Zero-Shot Generalization on the MM-FFHQ-Female Dataset

Zero-Shot Generalization Results

Ablation Studies

Ablation Studies Results

BibTeX Citation

@misc{zou2025mixturegloballocalexperts,
	title={Mixture of Global and Local Experts with Diffusion Transformer for Controllable Face Generation},
	author={Xuechao Zou and Shun Zhang and Xing Fu and Yue Li and Kai Li and Yushe Cao and Congyan Lang and Pin Tao and Junliang Xing},
	year={2025},
	eprint={2509.00428},
	archivePrefix={arXiv},
	primaryClass={cs.CV},
	url={https://arxiv.org/abs/2509.00428}
}

Acknowledgements

This webpage was originally made by Matan Kleiner with the help of Hila Manor. The code for the original template can be found here.
Icons are taken from Font Awesome or from Academicons.