Create a 3D Avatar from a Single Image: LAM Guide (2025)
Ever dreamed of turning a simple photo into a realistic, animated 3D avatar? The Large Avatar Model (LAM) makes it a reality. This guide will walk you through everything you need to know about LAM, from its core features to getting started with your own 3D avatar creation. This article focuses on how to use LAM for one-shot avatar generation, creating an animatable Gaussian head from a single image.
What is LAM and Why Should You Care?
LAM is designed for rapid creation of ultra-realistic 3D avatars from a single image. It offers cross-platform animation and rendering, plus a low-latency SDK for real-time interactive chatting.
Here's why LAM is a game-changer:
- Speed: Creates 3D avatars in seconds.
- Realism: Delivers high-quality, lifelike avatar representations.
- Accessibility: Works on various devices thanks to cross-platform animation.
Key Features: 3D Avatar Creation, Animation, and Chatting
LAM is more than just a 3D avatar creator; it's a complete solution for interactive avatar experiences.
- One-Shot Avatar Creation: Rapidly generates a 3D avatar from a single image.
- Cross-Platform Animation: Animate and render the avatar on any device.
- Interactive Chatting: A low-latency SDK for real-time communication with your 3D avatar. It ships with OpenAvatarChat, which integrates LLM, ASR, and TTS components.
- Audio2Expression: Drives the avatar's facial expressions from audio input.
Getting Started: Installation and Setup
Ready to dive in and create your own 3D avatar? Follow these steps to set up LAM on your system:
- Clone the Repository: Grab the LAM source code from GitHub.
- Install Dependencies: Run the install script that matches your CUDA version (12.1 or 11.8). A sketch of both steps follows this list.
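A minimal sketch of these two steps, assuming the project's GitHub repository lives at `aigc3d/LAM` and ships per-CUDA install scripts; the script paths are assumptions, so check the repository's README for the exact names:

```bash
# Clone the LAM repository and enter it
git clone https://github.com/aigc3d/LAM.git
cd LAM

# Install dependencies for your CUDA version (run one of the two).
# Script paths are assumed; verify them against the repo's scripts directory.
sh ./scripts/install/install_cu121.sh   # CUDA 12.1
sh ./scripts/install/install_cu118.sh   # CUDA 11.8
```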
Downloading Model Weights: Hugging Face and ModelScope
LAM's inference depends on pre-trained model weights, which you can download from either Hugging Face or ModelScope. You will also need the project's asset files before you can begin 3D portrait generation. Instructions for both sources follow.
Hugging Face Download
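A sketch using the official `huggingface-cli` downloader; the repo IDs (`3DAIGC/LAM-assets`, `3DAIGC/LAM-20K`) and target directories are assumptions, so verify them on the project's Hugging Face page:

```bash
# Install the Hugging Face CLI if needed
pip install "huggingface_hub[cli]"

# Download the asset files needed for 3D portrait generation (assumed repo ID)
huggingface-cli download 3DAIGC/LAM-assets --local-dir ./tmp

# Download the pre-trained model weights (assumed repo ID and directory)
huggingface-cli download 3DAIGC/LAM-20K --local-dir ./model_zoo/lam_models/
```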
ModelScope Download
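The same download via the `modelscope` CLI, again treating the model ID and target directory as assumptions to check against the project's ModelScope page:

```bash
# Install the ModelScope CLI if needed
pip install modelscope

# Download the pre-trained model weights (assumed model ID and directory)
modelscope download --model "3DAIGC/LAM-20K" --local_dir ./model_zoo/lam_models/
```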
Running Inference: Bring Your Avatar to Life
Once you've set up the environment and downloaded the model weights, you're ready to generate your 3D avatar!
Use the following command for inference:
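This is a sketch of a typical invocation, assuming the repository ships an `inference.sh` wrapper under `scripts/` (verify the script name against the current README):

```bash
# One-shot avatar generation: config file, checkpoint name, input image(s), driving motion
sh ./scripts/inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}
```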
Remember to replace ${CONFIG}, ${MODEL_NAME}, ${IMAGE_PATH_OR_FOLDER}, and ${MOTION_SEQ} with your specific configuration and file paths.
The Future of Avatars: What's Next for LAM?
The LAM team is constantly working on improvements and new features. Here's a sneak peek at what's coming:
- Release LAM-small, trained on VFHQ and NeRSemble.
- Release Hugging Face Space.
- Release ModelScope Space.
- Release LAM-large, trained on a self-constructed large dataset.
- Release WebGL renderer for cross-platform animation and rendering.
- Release the audio-driven model: Audio2Expression.
- Release the Interactive Chatting Avatar SDK with OpenAvatarChat, including LLM, ASR, TTS, and Avatar components.
Contributing and Acknowledgment
LAM builds on the work of many researchers and open-source projects, and gratefully acknowledges the contributions of:
- OpenLRM
- GAGAvatar
- GaussianAvatars
- VHAP
Citation
If you use LAM in your research, please cite the following paper:
@inproceedings{he2025LAM,
  title={LAM: Large Avatar Model for One-shot Animatable Gaussian Head},
  author={Yisheng He and Xiaodong Gu and Xiaodan Ye and Chao Xu and Zhengyi Zhao and Yuan Dong and Weihao Yuan and Zilong Dong and Liefeng Bo},
  booktitle={arXiv preprint arXiv:2502.17796},
  year={2025}
}