Visit our demo site for a real-time view!
For more details, check out the GitHub repository.
Under the hood, a mesh (in the .vrm format) is animated within GVRM,
and the poses of the splats follow the vertices of the mesh. That's all! :)
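The idea of splats following mesh vertices can be sketched roughly like this (a minimal CPU-side sketch with illustrative names, not the project's actual GLSL implementation — here each splat is assumed to be bound to a single mesh vertex and follows its displacement):

```typescript
type Vec3 = [number, number, number];

// Move each splat center by the displacement of the mesh vertex it is
// bound to. splatVertex[i] is the (assumed) vertex index for splat i.
function followVertices(
  splatCenters: Vec3[],   // rest-pose splat centers
  splatVertex: number[],  // mesh vertex each splat tracks
  restVerts: Vec3[],      // mesh vertices in the rest pose
  skinnedVerts: Vec3[]    // mesh vertices after skinning this frame
): Vec3[] {
  return splatCenters.map((c, i) => {
    const v = splatVertex[i];
    return [
      c[0] + (skinnedVerts[v][0] - restVerts[v][0]),
      c[1] + (skinnedVerts[v][1] - restVerts[v][1]),
      c[2] + (skinnedVerts[v][2] - restVerts[v][2]),
    ];
  });
}
```

In the real system this per-splat update happens on the GPU in GLSL; the sketch only illustrates the data flow.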
Technical note for expert readers:
We implemented the core logic in GLSL to achieve high performance. However, as you know, depth sorting of the splats is the most computationally expensive part. Feeding the sorter the updated positions of all splats every frame would cause excessive memory I/O, making real-time rendering nearly impossible. So we divide the splat scene into 14 sub-scenes and sort each sub-scene using its static (rest-pose) positions. In other words, we do not use the skinned results for sorting, but we do use the correct skinned results for rendering. This keeps the computational cost practical while maintaining visual accuracy, even in WebGL. ✌
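The sorting trick above can be sketched as follows (a simplified CPU-side sketch with made-up names; the actual project does this in GLSL): the draw order is computed from the static rest-pose positions only, so the sorter never touches the per-frame skinned data.

```typescript
type Vec3 = [number, number, number];

interface SubScene {
  // Rest-pose splat centers: fixed, so the sorter's inputs never change.
  staticPositions: Vec3[];
}

// Depth of a point along the camera's forward direction.
function viewDepth(p: Vec3, camForward: Vec3): number {
  return p[0] * camForward[0] + p[1] * camForward[1] + p[2] * camForward[2];
}

// Back-to-front draw order for one sub-scene, computed ONLY from the
// static positions. The skinned positions are used later, at draw time.
function sortOrder(scene: SubScene, camForward: Vec3): number[] {
  const order = scene.staticPositions.map((_, i) => i);
  order.sort(
    (a, b) =>
      viewDepth(scene.staticPositions[b], camForward) -
      viewDepth(scene.staticPositions[a], camForward)
  );
  return order;
}
```

Because each sub-scene is small and roughly rigid under skinning, sorting by its static positions is a close-enough approximation of the true skinned depth order, which is what makes the trade-off work.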
This work was supported by the Ochiai Pavilion at the Osaka/Kansai Expo 2025.
This work was supported by JSPS KAKENHI Grant Number 23KJ0284.
@misc{kondo2025instantskinnedgaussianavatars,
  title={Instant Skinned Gaussian Avatars for Web, Mobile and VR Applications},
  author={Naruya Kondo and Yuto Asano and Yoichi Ochiai},
  year={2025},
  eprint={2510.13978},
  archivePrefix={arXiv},
  primaryClass={cs.CG},
  url={https://arxiv.org/abs/2510.13978},
}