Fine-grained spatially varying material selection in images
Abstract
Selection is the first step in many image editing processes, enabling faster and simpler modifications of all pixels sharing a common modality. In this work, we present a method for material selection in images, robust to lighting and reflectance variations, which can be used for downstream editing tasks. We rely on vision transformer (ViT) models and leverage their features for selection, proposing a multi-resolution processing strategy that yields finer and more stable selection results than prior methods. Furthermore, we enable selection at two levels, texture and subtexture, leveraging our new two-level material selection (DuMaS) dataset, which includes dense annotations at both levels for over 800,000 synthetic images.
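To illustrate the multi-resolution idea in spirit, the sketch below fuses per-patch ViT features extracted at several input resolutions by upsampling every coarse patch grid to the finest one and averaging. This is a minimal, hypothetical sketch: the function names, nearest-neighbor upsampling, and mean fusion are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def upsample_nearest(grid, target_hw):
    """Nearest-neighbor upsample of an (H, W, C) patch-feature grid."""
    h, w, _ = grid.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th  # map each target row to a source row
    cols = np.arange(tw) * w // tw  # map each target col to a source col
    return grid[rows][:, cols]

def fuse_multires(feature_grids):
    """Fuse patch-feature grids from several resolutions (hypothetical).

    `feature_grids`: list of (H_i, W_i, C) arrays, e.g. features of the
    same image run through a frozen ViT at multiple input sizes. Coarser
    grids are upsampled to the finest grid, then averaged.
    """
    target = max((g.shape[:2] for g in feature_grids),
                 key=lambda s: s[0] * s[1])
    upsampled = [upsample_nearest(g, target) for g in feature_grids]
    return np.mean(upsampled, axis=0)

# Toy example: 4x4 and 8x8 patch grids with 16-dim features.
coarse = np.random.rand(4, 4, 16)
fine = np.random.rand(8, 8, 16)
fused = fuse_multires([coarse, fine])
print(fused.shape)  # (8, 8, 16)
```

In practice one would obtain `feature_grids` from a pretrained ViT backbone; the fusion step itself is resolution-agnostic, which is what allows finer selection maps than a single fixed patch grid.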
Downloads and links
- paper (PDF, 12 MB)
- supplemental document (PDF, 29 MB)
- arXiv preprint
- project page
- citation (BIB)
BibTeX reference
@article{GuerreroViu:2025:MaterialSelection,
author = {Julia Guerrero-Viu and Michael Fischer and Iliyan Georgiev and Elena Garces and Diego Gutierrez and Belen Masia and Valentin Deschaintre},
title = {Fine-grained spatially varying material selection in images},
journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)},
year = {2025},
volume = {44},
number = {6},
doi = {10.1145/3763332}
}
