Docker container for Shap-E (https://github.com/openai/shap-e)
-
Build the image:
`docker build -t shap-e .`
-
Create the container, mounting D:\mount to the model output directory (change D:\mount as needed):
`docker run -v D:\mount://usr/src/shap-e/shap_e/output --gpus=all -dit --name shap-e shap-e`
-
Alternatively, create the container without mounting a local folder to the output directory:
`docker run --gpus=all -dit --name shap-e shap-e`
-
Log into the Docker container:
`docker exec -it shap-e "/bin/bash"`
-
Run the generator (while inside the container):
`python text-to-glb.py`
-
This is the official code and model release for *Shap-E: Generating Conditional 3D Implicit Functions*.
- See Usage for guidance on how to use this repository.
- See Samples for examples of what our text-conditional model can generate.
Here are some highlighted samples from our text-conditional model. For random samples on selected prompts, see samples.md.
Sample gallery (images not included here): "A chair that looks like an avocado", "An airplane that looks like a banana", "A spaceship", "A birthday cupcake", "A chair that looks like a tree", "A green boot", "A penguin", "Ube ice cream cone", "A bowl of vegetables".
Install with `pip install -e .`.
To get started with examples, see the following notebooks:
- `sample_text_to_3d.ipynb` - sample a 3D model, conditioned on a text prompt.
- `sample_image_to_3d.ipynb` - sample a 3D model, conditioned on a synthetic view image. For the best result, remove the background from the input image (a sampling sketch follows this list).
- `encode_model.ipynb` - loads a 3D model or a trimesh, creates a batch of multiview renders and a point cloud, encodes them into a latent, and renders it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable `BLENDER_PATH` to the path of the Blender executable.