
clip-forge's Issues

Pre-training weights

  1. Could you share the pre-trained weights for Stage 1?

  2. Are there any plans to release the point cloud code?

Repeated text on "conda env create". Perhaps add a pip dependency in the environment.yaml file.

On Ubuntu Server 20, after typing

conda env create -f environment.yaml

I get the following output:
Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.

The message above repeats until exited via CTRL-C.

I was able to fix this by adding "- pip==21.0.1" to the environment file after reading about the issue here: conda/conda#10614.
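For reference, a minimal sketch of where the pin goes in environment.yaml (the name and the other entries below are placeholders, not the repo's actual dependency list; python=3.7 matches the tracebacks elsewhere on this page):

    name: clip-forge            # placeholder
    dependencies:
      - python=3.7
      - pip==21.0.1             # the explicit pip pin that stops the warning loop
      - pip:
          - torch               # illustrative; keep the repo's existing pip entries here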

Thank you.

Unable to reproduce the autoencoder training results

Hi, I'm trying to reproduce the results of Clip-Forge by training from scratch. I trained the autoencoder on the ShapeNet data downloaded from the occupancy-networks repository, but got unsatisfactory results compared to the pretrained model. I did not change the hyperparameters, except that I increased the batch_size from 32 to 256 to make better use of GPU memory (I assumed this would not harm performance, and might even improve it). So I'm wondering: did you use the same default hyperparameters to train the autoencoder, or did you use some special training tricks? And do you have any ideas for improving the autoencoder's performance, since it is crucial to the final shape generation ability?

Here are some visualizations to show the differences in reconstruction results on the training set.

Pretrained autoencoder:
[four reconstruction renderings]

Training from scratch:
[four reconstruction renderings]
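On the batch-size change above, one caveat (my own note, not something from the authors): raising the batch size from 32 to 256 without adjusting the learning rate changes the optimization dynamics and can itself degrade results. A common heuristic is to scale the learning rate linearly with the batch size; a sketch with illustrative values:

    # base_lr and base_batch are illustrative, not Clip-Forge's actual defaults
    base_lr, base_batch = 1e-4, 32
    batch_size = 256
    lr = base_lr * (batch_size / base_batch)  # linear scaling rule -> 8e-4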

FID Calculations failing

I'm trying to reproduce the Fréchet Inception Distance (FID) metrics for the trained CLIP-Forge model and am not able to.

When I provide the pre-trained weights as input to test_post_clip.py, the calculate_frechet_distance method in fid_cal.py produces extremely large imaginary components in the matrix square root procedure (even after adding values along the diagonal entries). As a result, it throws the following error:

ValueError: Imaginary component 3.166298900864714e+272

Do you know what could be causing this error and what I could do to fix it?
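For what it's worth, the common mitigation in FID implementations (e.g. pytorch-fid) is to offset the covariance diagonals before the matrix square root and tolerate only tiny imaginary residue; a sketch, with the helper name stable_trace_sqrtm being mine:

    import numpy as np
    from scipy import linalg

    def stable_trace_sqrtm(sigma1, sigma2, eps=1e-6):
        # Offset the covariances so the matrix square root stays (near-)real.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset), disp=False)
        if np.iscomplexobj(covmean):
            # Accept tiny numerical imaginary parts only.
            if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
                raise ValueError("Imaginary component %g" % np.max(np.abs(covmean.imag)))
            covmean = covmean.real
        return np.trace(covmean)

That said, an imaginary component on the order of 1e+272 usually points to degenerate statistics (NaNs in the activations, or far fewer samples than feature dimensions) rather than ordinary floating-point noise, so it may be worth checking the extracted features first.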

Please upload the point cloud code!

Hi, I want to reproduce your code on the point cloud dataset. Your README says the point cloud code will be uploaded soon, but I cannot find it. Please upload the point cloud code. Thanks :)

rendering script

Congratulations, this is wonderful work. As you mentioned:

I believe a future work direction would involve improving the quality of the shapes generated.

Could you provide the rendering script used to produce the visualization results in the teaser?

How to get a 3D model from the output?

Currently, the inference code only outputs an image rendering of the generated voxels. Is it possible to get the 3D model file as output, in .obj or .gltf format?
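One possible route (a sketch of mine, not an official feature of the repo): run marching cubes over the voxel occupancy grid and export the mesh with trimesh. The variable voxels_out and the 0.5 threshold are assumptions:

    import trimesh
    from skimage import measure

    # voxels_out: (N, N, N) occupancy grid from the generation step (assumed name)
    verts, faces, normals, _ = measure.marching_cubes(voxels_out.astype(float), level=0.5)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
    mesh.export("generated_shape.obj")  # trimesh can also write .glb/.gltf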

Error in dataloader multiprocessing

Hi,
I'm trying to re-train the model from scratch, but after the first epoch there is an AssertionError. The program can still execute the next epoch, so I want to ask: is this normal, or does it influence the final results?
[screenshot of the AssertionError traceback]

Checkpoints for Point Cloud

Thanks for your excellent work!
Could you please share the pre-trained weights for point clouds, including the autoencoder and the normalizing flow (nf)?
Thanks! : )

the point cloud code will be released soon

Thanks for your wonderful work. I noticed that the README says the point cloud code will be released soon. Does this mean there is a version of the code that generates point clouds as output?

add web demo/model to Huggingface

Hi, would you be interested in adding Clip-Forge to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/salesforce/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft

and here are guides for adding Spaces/models/datasets to your org:

How to add a Space: https://huggingface.co/blog/gradio-spaces
How to add models: https://huggingface.co/docs/hub/adding-a-model
Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

utils.visualization.multiple_plot_voxel raises a ValueError

Hi, thanks for the brilliant work.

I ran into errors during both training steps, such as:

Traceback (most recent call last):
  File "train_post_clip.py", line 366, in <module>
    main()          
  File "train_post_clip.py", line 350, in main
    generate_on_query_text(args, clip_model, net, latent_flow_network)
  File "train_post_clip.py", line 182, in generate_on_query_text
    visualization.multiple_plot_voxel(voxels_out, save_loc=save_loc +"{}_text_query.png".format(text_in))
  File "/root/Clip-Forge/utils/visualization.py", line 212, in multiple_plot_voxel
    ax = fig.add_subplot(plt_num, projection=Axes3D.name)
  File "/opt/conda/lib/python3.7/site-packages/matplotlib/figure.py", line 772, in add_subplot
    ax = subplot_class_factory(projection_class)(self, *args, **pkw)
  File "/opt/conda/lib/python3.7/site-packages/matplotlib/axes/_subplots.py", line 36, in __init__
    self.set_subplotspec(SubplotSpec._from_subplot_args(fig, args))
  File "/opt/conda/lib/python3.7/site-packages/matplotlib/gridspec.py", line 583, in _from_subplot_args
    f"Single argument to subplot must be a three-digit "
ValueError: Single argument to subplot must be a three-digit integer, not '131'

This occurs after the first epoch (for the autoencoder).

I located the offending visualization call, commented it out with a hash symbol, and training then worked fine.

Reviewing the code, I found that utils.visualization.multiple_plot_voxel (line 213 in ./utils/visualization.py) passes a str value to fig.add_subplot:

        plt_num = "1" + str(len(batch_data_points)) +str(i+1)
        ax = fig.add_subplot(plt_num, projection=Axes3D.name)

According to the Matplotlib API docs, an int must be passed to this function, not a str.

Please check whether this function still works correctly, or whether the error is due to the library version.
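For reference, a minimal fix in the context of that function, assuming a Matplotlib version (3.3+) in which the single-argument string form is no longer accepted, is either to cast to int or to use the explicit three-argument form:

        # Option 1: cast the composed subplot code to an int
        ax = fig.add_subplot(int(plt_num), projection=Axes3D.name)

        # Option 2 (clearer): pass nrows, ncols, index separately
        ax = fig.add_subplot(1, len(batch_data_points), i + 1, projection=Axes3D.name)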

conda env create -f environment.yaml

When I run conda env create -f environment.yaml, I get an error:
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • pyembree=0.1.4

Does anyone know what's going on? Thank you.
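A likely cause (my guess, not confirmed here) is that pyembree is published on the conda-forge channel only, so the default channels cannot resolve it; adding the channel to environment.yaml usually fixes this:

    # environment.yaml (sketch)
    channels:
      - conda-forge
      - defaults
    dependencies:
      - pyembree=0.1.4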

Why do you visualize the results with voxel grids?

Congratulations on this impressive work! But I'm wondering why you visualize the generated shapes with voxel grids instead of meshes in the paper. Since Clip-Forge adopts an implicit shape decoder, it should be convenient to extract the iso-surface.

Does the code support multi-GPU training?

I set the argument --gpu "0" "1" "2" and got: RuntimeError: Invalid device string: 'cuda:0,1,2'
I want to know: does the code support multi-GPU training, and if so, what do I need to do?
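For context, the error suggests the GPU list is being joined into a single device string, which PyTorch does not accept. Multi-GPU training is typically enabled by placing the model on one primary device and wrapping it in nn.DataParallel; a minimal sketch with a placeholder model:

    import torch
    import torch.nn as nn

    net = nn.Linear(10, 10)              # placeholder for the actual Clip-Forge model
    device = torch.device("cuda:0")      # a single primary device string
    net = net.to(device)
    if torch.cuda.device_count() > 1:
        # Replicates the module across GPUs and splits each batch among them.
        net = nn.DataParallel(net, device_ids=[0, 1, 2])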

Dataset not downloading train.lst or associated files

2022-06-19 12:44:31,178 - INFO - #############################
{'__pycache__': 0}
{'__pycache__': {'id': '__pycache__', 'name': 'n/a'}}
Traceback (most recent call last):
  File "train_autoencoder.py", line 383, in <module>
    main()   
  File "train_autoencoder.py", line 316, in main
    train_dataloader, total_shapes  = get_dataloader(args, split="train")
  File "train_autoencoder.py", line 105, in get_dataloader
    dataset = shapenet_dataset.Shapes3dDataset(args.dataset_path, fields, split=split,
  File "/home/yboutros/repos/MeshGen/zeroshot/AudodeskAILab/Clip-Forge/dataset/shapenet_dataset.py", line 409, in __init__
    with open(split_file, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'dataset/__pycache__/train.lst'

So, I should probably be doing this in conda; if this continues to fail I'll just set that up. But what kind of data is train.lst looking for? Looking at other repos, it seems train.lst is just a text file associating input data paths with output data paths. I downloaded the exps folder mentioned in the usage instructions, but all the subfolders are empty.
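For reference, in the occupancy-networks data layout that the Shapes3dDataset loader expects, train.lst is a plain text file inside each category folder listing one model ID per line (the IDs below are illustrative, not real entries):

    # <dataset_path>/<category_id>/train.lst (illustrative)
    1a2b3c4d5e6f7a8b9c0d
    2b3c4d5e6f7a8b9c0d1e
    3c4d5e6f7a8b9c0d1e2f

Note also that the traceback shows '__pycache__' being picked up as a category, which suggests dataset_path is pointing at the repo's dataset/ code directory rather than at the extracted ShapeNet data.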
