Fine-grained spatially varying material selection in images

SIGGRAPH Asia 2025 (journal)

Our method enables fine-grained material selection in images at two levels of granularity, significantly outperforming prior work (Materialistic [Sharma et al. 2023]) in selection accuracy and consistency. We show results on examples made challenging by specular reflections (top left) and by fine patterns outside the training data (top right, bottom left). Selection masks are shown as green image overlays. The bottom-right row shows material-editing results using our predicted two-level selection masks, with the masks shown as insets.

Abstract

Selection is the first step in many image-editing workflows, enabling faster and simpler modification of all pixels sharing a common property. In this work, we present a method for material selection in images that is robust to lighting and reflectance variations and can be used for downstream editing tasks. We rely on vision transformer (ViT) models and leverage their features for selection, proposing a multi-resolution processing strategy that yields finer and more stable selection results than prior methods. Furthermore, we enable selection at two levels, texture and subtexture, leveraging a new two-level material selection (DuMaS) dataset, which includes dense annotations for over 800,000 synthetic images at both the texture and subtexture levels.
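The core idea, selecting all pixels whose features resemble those of a user-clicked pixel, aggregated over several resolutions, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the stand-in feature extractor, the nearest-neighbour resizing, the averaging of similarity maps, and the threshold value are all illustrative assumptions (the paper uses learned ViT features).

```python
import numpy as np

def cosine_similarity_map(features, query_yx):
    """Cosine similarity between every pixel's feature and the query pixel's.
    features: (H, W, C) array; query_yx: (row, col) of the clicked pixel."""
    q = features[query_yx]                                   # (C,)
    dots = features @ q                                      # (H, W)
    norms = np.linalg.norm(features, axis=-1) * np.linalg.norm(q) + 1e-8
    return dots / norms

def multi_resolution_select(image, extract_features, query_yx,
                            scales=(1.0, 0.5), threshold=0.8):
    """Compute a similarity map at several resolutions, average them at full
    resolution, and threshold into a binary selection mask.
    `extract_features` is a stand-in for a ViT feature extractor."""
    H, W = image.shape[:2]
    maps = []
    for s in scales:
        h, w = max(1, int(H * s)), max(1, int(W * s))
        # Nearest-neighbour downsampling (stand-in for proper resampling).
        small = image[np.arange(h) * H // h][:, np.arange(w) * W // w]
        feats = extract_features(small)                      # (h, w, C)
        qy = min(int(query_yx[0] * s), h - 1)
        qx = min(int(query_yx[1] * s), w - 1)
        sim = cosine_similarity_map(feats, (qy, qx))
        # Upsample the similarity map back to full resolution.
        up = sim[np.arange(H) * h // H][:, np.arange(W) * w // W]
        maps.append(up)
    return np.mean(maps, axis=0) >= threshold

# Toy example: per-pixel RGB as a 3-D "feature"; two flat-colored "materials".
img = np.zeros((32, 32, 3))
img[:, :16] = [1.0, 0.2, 0.2]   # left half: one material
img[:, 16:] = [0.2, 0.2, 1.0]   # right half: another
mask = multi_resolution_select(img, lambda x: x, query_yx=(5, 5))
print(mask[5, 5], mask[5, 30])  # → True False
```

Averaging similarity maps across scales is what makes the selection both fine (high-resolution pass) and stable (low-resolution pass suppresses pixel-level noise); with a real ViT backbone, `extract_features` would return patch-level features upsampled to per-pixel resolution.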

Downloads and links

BibTeX reference

@article{GuerreroViu:2025:MaterialSelection,
  author = {Julia Guerrero-Viu and Michael Fischer and Iliyan Georgiev and Elena Garces and Diego Gutierrez and Belen Masia and Valentin Deschaintre},
  title = {Fine-grained spatially varying material selection in images},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)},
  year = {2025},
  volume = {44},
  number = {6},
  doi = {10.1145/3763332}
}