Grounded 3D-LLM with Referent Tokens

Yilun Chen1*, Shuai Yang1,2*, Haifeng Huang1,2*, Tai Wang1, Ruiyuan Lyu1,
Runsen Xu3, Dahua Lin1,3, Jiangmiao Pang1†
1Shanghai AI Laboratory, 2Zhejiang University, 3The Chinese University of Hong Kong
*Indicates Equal Contribution, †Indicates Corresponding Author

Abstract

Prior studies on 3D scene understanding have primarily developed specialized models for specific tasks or required task-specific fine-tuning. In this study, we propose Grounded 3D-LLM, which explores the potential of 3D large multi-modal models (3D LMMs) to consolidate various 3D vision tasks within a unified generative framework. The model uses scene referent tokens as special noun phrases to reference 3D scenes, enabling it to handle sequences that interleave 3D and textual data. It offers a natural approach for translating 3D vision tasks into language formats using task-specific instruction templates. To facilitate the use of referent tokens in subsequent language modeling, we curated large-scale grounded language datasets that offer finer scene-text correspondence at the phrase level by bootstrapping existing object labels. We then introduce Contrastive LAnguage-Scene Pre-training (CLASP) to effectively leverage this data, thereby integrating 3D vision with language models. Our comprehensive evaluation covers open-ended tasks such as dense captioning and 3D question answering, alongside closed-ended tasks such as object detection and language grounding. Experiments across multiple 3D benchmarks show the leading performance and broad applicability of Grounded 3D-LLM.

Figure: Generalist outputs of Grounded 3D-LLM.

Contributions

  • We develop Grounded 3D-LLM, the first model to establish phrase-level correspondence between 3D scenes and language through referent tokens. This design enhances scene referencing and effectively supports 3D vision tasks in language modeling, including single- and multi-object grounding, and introduces 3D detection to this setting for the first time.
  • We design an automated curation pipeline for grounded 3D scene captions that provides finer scene-text correspondence at the phrase level. Experiments with CLASP in both supervised and zero-shot text settings demonstrate the effectiveness of pre-training on this data for phrase-level scene-text alignment.
  • The Grounded 3D-LLM model tackles 3D grounding and language tasks generatively without the need for specialized models. It achieves top-tier performance in most downstream tasks among generative models, particularly in grounding problems, without task-specific fine-tuning.

Method

The training of Grounded 3D-LLM proceeds in two steps. First, CLASP uses extensive phrase-level scene-text annotations to pre-train a 3D point cloud encoder and a cross-modal interactor. Second, multi-task instruction tuning interleaves referent tokens within instructions and responses, enabling flexible 3D scene understanding tasks.
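To make the interleaving concrete, below is a minimal sketch of how a grounded instruction-tuning sample could pair noun phrases with referent tokens. The token name and template format here are illustrative assumptions, not the exact ones used in the paper.

import json

REF_TOKEN = "<ref>"  # placeholder token later matched to 3D instance queries

def build_grounded_sample(instruction, response_phrases):
    """Assemble a response where grounded noun phrases carry referent tokens.

    response_phrases: list of (text, is_grounded) pairs; grounded phrases get
    a trailing referent token whose embedding is supervised against the scene.
    """
    response_parts = []
    num_referents = 0
    for text, is_grounded in response_phrases:
        response_parts.append(text)
        if is_grounded:
            response_parts.append(REF_TOKEN)
            num_referents += 1
    return {"instruction": instruction,
            "response": " ".join(response_parts),
            "num_referents": num_referents}

sample = build_grounded_sample(
    "Where can I sit near the window?",
    [("You could sit on", False),
     ("the gray armchair", True),
     ("next to", False),
     ("the window", True)])
print(json.dumps(sample, indent=2))
# "You could sit on the gray armchair <ref> next to the window <ref>"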

Our Framework

Results Visualization


Grounded Scene Caption Data

We propose an automated grounded-language dataset generation process that uses ChatGPT and 2D vision-language models to create the Grounded Scene Caption dataset (G-SceneCap); a skeleton of this pipeline is sketched after the list:

  • Step 1: Bootstrapping object captions with ground-truth (GT) label correction. Using 3D real-scan datasets, we annotate each object with the vision-language model CogVLM, based on the images of its largest visible areas. Inconsistent annotations are rectified using the raw instance labels.
  • Step 2: Condensing objects in local scenes into a caption. For each enumerated anchor object, we form an initial object set by randomly selecting a group of nearby objects. Their captions and coordinates (x, y, z) are fed to GPT-4 for captioning, which is required to reference each object by its ID in the format "[object_phrase object_ID]" within the caption.
  • Step 3: Adding rule-based relations into captions. To enrich the scene captions, we integrate program-generated spatial relationships from Sr3D. Selecting an anchor object from the set in Step 2, we apply spatial relation rules (e.g., between, supporting, nearest, back) to include related objects. GPT-4 then merges these relationships into the caption from Step 2.
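The following is a skeleton of the three-step pipeline above. All helper names, prompts, and stubbed calls are illustrative assumptions; the real pipeline queries CogVLM for object captions, GPT-4 for caption fusion, and Sr3D rules for spatial relations.

import math

def caption_object(obj):
    """Step 1 (stub): caption an object from its most visible image crop and
    fall back to the GT instance label when the VLM output conflicts."""
    vlm_caption = f"a {obj['label']}"   # stand-in for the CogVLM caption
    return {"id": obj["id"], "caption": vlm_caption, "xyz": obj["xyz"]}

def fuse_captions(prompt):
    """Stub for the GPT-4 call; returns the prompt so the sketch stays runnable."""
    return prompt

def condense_local_scene(anchor, objects, k=5):
    """Step 2: gather objects near the anchor and ask GPT-4 to write one
    caption that cites each object as '[object_phrase object_ID]'."""
    nearby = sorted(objects, key=lambda o: math.dist(anchor["xyz"], o["xyz"]))[:k]
    prompt = "Write one caption citing objects as [phrase ID]:\n" + "\n".join(
        f"ID {o['id']}: {o['caption']} at {o['xyz']}" for o in nearby)
    return fuse_captions(prompt), nearby

def add_relations(caption, relations):
    """Step 3: merge Sr3D-style relations (between, supporting, nearest, back)
    for a chosen anchor into the caption from Step 2."""
    facts = "\n".join(f"[{s}] is {rel} [{o}]" for s, rel, o in relations)
    return fuse_captions(caption + "\nAlso mention:\n" + facts)

# Example usage with two toy objects.
objs = [caption_object({"id": 1, "label": "sofa",  "xyz": (0.0, 0.0, 0.0)}),
        caption_object({"id": 2, "label": "table", "xyz": (1.0, 0.5, 0.0)})]
scene_caption, _ = condense_local_scene(objs[0], objs, k=2)
print(add_relations(scene_caption, [(2, "nearest to", 1)]))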

Pipeline of Grounded Scene Dataset Curation

Example Visualization of Grounded Scene Caption Dataset

BibTeX

@article{chen2024grounded,
      title={Grounded 3D-LLM with Referent Tokens}, 
      author={Chen, Yilun and Yang, Shuai and Huang, Haifeng and Wang, Tai and Lyu, Ruiyuan and Xu, Runsen and Lin, Dahua and Pang, Jiangmiao},
      journal={arXiv preprint arXiv:2405.10370},
      year={2024},
}