GaussianCross: Cross-modal Self-supervised 3D Representation Learning via Gaussian Splatting
GaussianCross is a novel cross-modal self-supervised 3D representation learning architecture that integrates feed-forward 3D Gaussian Splatting (3DGS) techniques. It aims to generate informative and robust point representations for 3D scene understanding, demonstrating strong performance on tasks like semantic and instance segmentation.

Abstract
The significance of informative and robust point representations has been widely acknowledged for 3D scene understanding. Despite existing self-supervised pre-training counterparts demonstrating promising performance, the model collapse and structural information deficiency remain prevalent due to insufficient point discrimination difficulty, yielding unreliable expressions and suboptimal performance. In this paper, we present GaussianCross, a novel cross-modal self-supervised 3D representation learning architecture integrating feed-forward 3D Gaussian Splatting (3DGS) techniques to address current challenges. GaussianCross seamlessly converts scale-inconsistent 3D point clouds into a unified cuboid-normalized Gaussian representation without missing details, enabling stable and generalizable pre-training. Subsequently, a tri-attribute adaptive distillation splatting module is incorporated to construct a 3D feature field, facilitating synergetic feature capturing of appearance, geometry, and semantic cues to maintain cross-modal consistency. To validate GaussianCross, we perform extensive evaluations on various benchmarks, including ScanNet, ScanNet200, and S3DIS. In particular, GaussianCross shows a prominent parameter and data efficiency, achieving superior performance through linear probing (<0.1% parameters) and limited data training (1% of scenes) compared to state-of-the-art methods. Furthermore, GaussianCross demonstrates strong generalization capabilities, improving the full fine-tuning accuracy by 9.3% mIoU and 6.1% AP$_{50}$ on ScanNet200 semantic and instance segmentation tasks, respectively, supporting the effectiveness of our approach.
Pipeline

Installation
Our model is built on the Pointcept toolkit. You can follow its official instructions to install the packages:
conda create -n GaussianCross python=3.8 -y
conda activate GaussianCross
# Further installation steps can be found in the Pointcept documentation or the GaussianCross GitHub repository.
# Example from Pointcept's README:
# pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118
# pip install -r requirements.txt
# python setup.py develop
Note that Pointcept also provides a script to build a corresponding Docker image: build_image.sh
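If a containerized setup is preferred, the image can be built with that script. A minimal sketch, assuming the script sits under scripts/ as in Pointcept (verify the path in your checkout):
# Build the Docker image via Pointcept's helper script (path is an assumption)
bash scripts/build_image.sh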
Data Preprocessing
ScanNet V2 & ScanNet200
- Download the ScanNet V2 dataset.
- Run the preprocessing code for raw ScanNet as follows (detailed scripts are in the GitHub repository; a hedged example command is sketched after this list):
# xxx (Refer to GitHub for specific commands, e.g., python tools/prepare_scannet.py)
- Link processed dataset to codebase:
# PROCESSED_SCANNET_DIR: the directory of the processed ScanNet dataset.
mkdir data
ln -s ${PROCESSED_SCANNET_DIR} ${CODEBASE_DIR}/data/scannet
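The exact preprocessing command is documented in the GitHub repository. As a rough sketch following Pointcept's preprocessing convention (the script path, the RAW_SCANNET_DIR variable, and the flags below are assumptions, not the official command):
# Hypothetical example: convert raw ScanNet scans into the processed format
# RAW_SCANNET_DIR: the directory of the downloaded raw ScanNet V2 dataset.
python pointcept/datasets/preprocessing/scannet/preprocess_scannet.py \
  --dataset_root ${RAW_SCANNET_DIR} \
  --output_root ${PROCESSED_SCANNET_DIR}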
S3DIS
We use the preprocessed S3DIS data from Pointcept.
- Link processed dataset to codebase:
# PROCESSED_S3DIS_DIR: the directory of the processed S3DIS dataset.
ln -s ${PROCESSED_S3DIS_DIR} ${CODEBASE_DIR}/data/s3dis
Usage (Training with Pretrained Weights)
The training process is based on the configs in the configs folder of the GitHub repository. The training scripts create an experiment folder under exp and back up essential code in that folder. The training config, log file, TensorBoard records, and checkpoints are also saved there during training.
Attention: a critical difference from Pointcept is that most data augmentation operations are performed on the GPU (see the corresponding transform file in the repository). Make sure ToTensor is placed before the augmentation operations.
Download the pretrained 3D backbone from this Hugging Face repository.
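For instance, the checkpoint can be fetched with the Hugging Face CLI. A minimal sketch, where <HF_REPO_ID> is a placeholder for the actual repository id and model_last.pth matches the filename used below:
# Download the pretrained backbone checkpoint (repository id is a placeholder)
pip install -U "huggingface_hub[cli]"
huggingface-cli download <HF_REPO_ID> model_last.pth --local-dir ./pretrained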
ScanNet V2 Examples
# Load the pretrained model
WEIGHT="path/to/downloaded/model/model_last.pth"
# Linear Probing
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-base-lin -n semseg-spunet-base-lin -w $WEIGHT
# Semantic Segmentation
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-base -n semseg-spunet-base -w $WEIGHT
# Instance Segmentation
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/train.sh -g 4 -d scannet -c insseg-pg-spunet-base -n insseg-pg-spunet-base -w $WEIGHT
# Parameter Efficiency and Data Efficiency
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/train.sh -g 4 -d scannet -c semseg-spunet-efficient-[la20-lr20] -n semseg-spunet-efficient-[la20-lr20] -w $WEIGHT
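The la and lr suffixes presumably follow the ScanNet data-efficient benchmark settings (limited annotations and limited reconstructions, respectively); please check the config files in the repository for the exact splits.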
For more detailed training scripts and configurations for ScanNet200 and S3DIS, please refer to the official GitHub repository.
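As a rough sketch, other datasets follow the same train.sh pattern; the dataset and config names below are assumptions, so check the configs folder for the exact names:
# Hypothetical ScanNet200 semantic segmentation example (config name is an assumption)
CUDA_VISIBLE_DEVICES=0,1,2,3 sh scripts/train.sh -g 4 -d scannet200 -c semseg-spunet-base -n semseg-spunet-base -w $WEIGHT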
Acknowledgement
The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club Charities Trust.
Our code is primarily built upon Pointcept, Ponder V2 and gsplat.
Citation
If you find our work helpful or inspiring, please feel free to cite it.
@article{yao2025gaussiancross,
  title={GaussianCross: Cross-modal Self-supervised 3D Representation Learning via Gaussian Splatting},
  author={Yao, Lei and Wang, Yi and Zhang, Yi and Liu, Moyun and Chau, Lap-Pui},
  journal={arXiv preprint arXiv:2508.02172},
  year={2025}
}
or
@inproceedings{yao2025gaussiancross,
  title={GaussianCross: Cross-modal Self-supervised 3D Representation Learning via Gaussian Splatting},
  author={Yao, Lei and Wang, Yi and Zhang, Yi and Liu, Moyun and Chau, Lap-Pui},
  booktitle={Proceedings of the 33rd ACM International Conference on Multimedia},
  year={2025}
}