PARTE: Part-Guided Texturing for 3D Human Reconstruction from a Single Image

1Seoul National University, 2Korea University
*Equal contribution, Corresponding author
ICCV 2025

Abstract

Misaligned textures across different human parts are one of the main limitations of existing 3D human reconstruction methods. Each human part, such as a jacket or pants, should maintain a distinct texture without blending into others. The structural coherence of human parts serves as a crucial cue for inferring human textures in regions that are invisible in a single image. However, most existing 3D human reconstruction methods do not explicitly exploit such part segmentation priors, leading to misaligned textures in their reconstructions.

In this regard, we present PARTE, which uses 3D human part information as a key guide to reconstruct 3D human textures. Our framework comprises two core components. First, to infer 3D human part information from a single image, we propose a 3D part segmentation module (PartSegmenter) that initially reconstructs a textureless human surface and then predicts human part labels on it. Second, to incorporate part information into texture reconstruction, we introduce a part-guided texturing module (PartTexturer), which acquires prior knowledge about the texture alignment of human parts from a pre-trained image generation network. Extensive experiments demonstrate that our framework achieves state-of-the-art quality in 3D human reconstruction.

Introducing PARTE

PARTE first reconstructs a textureless human mesh from a single image and then textures it with two core modules. PartSegmenter predicts 3D human part labels on the textureless mesh by combining information from the input image and normal maps of the textureless human surface. Based on the 3D human part segmentation, PartTexturer reconstructs the human textures with the PartDiffusion network, which infers a plausible human appearance consistent with the human parts.
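
The data flow above can be summarized in a short Python sketch. This is a minimal sketch under assumed interfaces: TexturelessMesh, PartSegmenterLike, PartTexturerLike, and reconstruct are hypothetical names that only illustrate how the modules are composed, not PARTE's actual API.

# Minimal sketch of the PARTE data flow with hypothetical interfaces.
from dataclasses import dataclass
from typing import Any, Protocol

@dataclass
class TexturelessMesh:
    vertices: Any   # (V, 3) surface geometry reconstructed from the input image
    faces: Any      # (F, 3) triangle indices

class PartSegmenterLike(Protocol):
    def __call__(self, image: Any, mesh: TexturelessMesh) -> Any:
        """Predict a 3D part label for every vertex from the image and rendered normal maps."""

class PartTexturerLike(Protocol):
    def __call__(self, image: Any, mesh: TexturelessMesh, part_labels: Any) -> Any:
        """Infer per-vertex (or UV) texture guided by the 3D part labels."""

def reconstruct(image: Any, mesh_reconstructor, part_segmenter: PartSegmenterLike,
                part_texturer: PartTexturerLike):
    mesh = mesh_reconstructor(image)                   # 1) textureless human surface
    part_labels = part_segmenter(image, mesh)          # 2) 3D part segmentation
    texture = part_texturer(image, mesh, part_labels)  # 3) part-guided texturing
    return mesh, part_labels, texture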

PartSegmenter

PartSegmenter renders normal maps of the textureless human surface and predicts part segments from them. The part segments are then used to vote 3D human part labels onto the human surface.
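
As an illustration of this voting step, the NumPy sketch below accumulates per-view 2D part labels into per-vertex 3D labels by majority vote. The assumed data layout (per-view label maps, per-vertex pixel projections, and visibility masks) and all names are illustrative, not PARTE's actual interface.

import numpy as np

def vote_part_labels(num_vertices, num_parts, view_label_maps, vertex_pixels, vertex_visible):
    """Majority-vote a 3D part label for each vertex from multiple rendered views.

    view_label_maps: list of (H, W) integer part-label images, one per view.
    vertex_pixels:   list of (num_vertices, 2) integer (u, v) pixel coordinates per view.
    vertex_visible:  list of (num_vertices,) boolean visibility masks per view.
    """
    votes = np.zeros((num_vertices, num_parts), dtype=np.int64)
    for labels_2d, pix, vis in zip(view_label_maps, vertex_pixels, vertex_visible):
        vidx = np.flatnonzero(vis)                 # vertices visible in this view
        u, v = pix[vidx, 0], pix[vidx, 1]
        part = labels_2d[v, u]                     # 2D part label under each projection
        np.add.at(votes, (vidx, part), 1)          # one vote per visible vertex per view
    return votes.argmax(axis=1)                    # per-vertex 3D part label

# Toy usage: 100 vertices, 5 parts, 4 rendered views of 64x64 pixels.
rng = np.random.default_rng(0)
maps = [rng.integers(0, 5, (64, 64)) for _ in range(4)]
pixels = [rng.integers(0, 64, (100, 2)) for _ in range(4)]
visible = [rng.random(100) > 0.3 for _ in range(4)]
print(vote_part_labels(100, 5, maps, pixels, visible)[:10])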

PartTexturer

PartTexturer integrates three key sources of information to infer the texture of invisible regions: the front-view image, text prompts, and the part segments of the rendering view.
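
One common way to condition an image diffusion model on a per-pixel part-segmentation map is to encode the map to the latent resolution and concatenate it with the noisy latent as extra input channels. The PyTorch sketch below shows only this generic pattern under assumed sizes; the front-view image and text prompt, which would typically enter through cross-attention, are omitted, and none of the names reflect PartDiffusion's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PARTS = 8        # assumed number of human part labels
LATENT_CHANNELS = 4  # typical latent-diffusion channel count

class PartConditionEncoder(nn.Module):
    """Encodes a one-hot part map into features at the latent resolution."""
    def __init__(self, num_parts: int, out_channels: int = 16):
        super().__init__()
        self.num_parts = num_parts
        self.net = nn.Sequential(
            nn.Conv2d(num_parts, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1),
        )

    def forward(self, part_map: torch.Tensor) -> torch.Tensor:
        # part_map: (B, H, W) integer labels -> (B, num_parts, H, W) one-hot channels
        one_hot = F.one_hot(part_map, self.num_parts).permute(0, 3, 1, 2).float()
        return self.net(one_hot)

# Assemble the denoiser input: noisy latent + part-condition features.
encoder = PartConditionEncoder(NUM_PARTS)
noisy_latent = torch.randn(1, LATENT_CHANNELS, 64, 64)
part_map = torch.randint(0, NUM_PARTS, (1, 512, 512))    # rendered part segments
cond = encoder(part_map)                                 # (1, 16, 64, 64)
denoiser_input = torch.cat([noisy_latent, cond], dim=1)  # (1, 20, 64, 64)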

3D segmentation results of PartSegmenter

Our PartSegmenter performs accurate 3D part segmentation even when given only a front-view image and noisy geometry.

Image generation results of PartDiffusion (PartTexturer)

Our proposed PartDiffusion, a core component of PartTexturer, generates human images from novel views while preserving the input appearance.

BibTeX

@inproceedings{nam2025parte,
  author    = {Nam, Hyeongjin and Kim, Donghwan and Moon, Gyeongsik and Lee, Kyoung Mu},
  title     = {{PARTE}: Part-Guided Texturing for 3D Human Reconstruction from a Single Image},
  booktitle = {ICCV},
  year      = {2025},
}