
preddy5 / im2vec


[CVPR 2021 Oral] Im2Vec: Synthesizing Vector Graphics without Vector Supervision

Home Page: http://geometry.cs.ucl.ac.uk/projects/2021/im2vec/

License: Apache License 2.0

Jupyter Notebook 12.53% Python 86.01% Shell 1.46%
computer-graphics computer-vision cvpr2021 vector-graphics

im2vec's People

Contributors

preddy5


im2vec's Issues

No module named 'pydiffvg'

I have some trouble installing pydiffvg:

pip install pydiffvg

ERROR: Could not find a version that satisfies the requirement pydiffvg (from versions: none)

ERROR: No matching distribution found for pydiffvg
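
For anyone hitting this: pydiffvg is not published on PyPI; it is provided by the diffvg rasterizer, which is built from source. A minimal sketch following the upstream https://github.com/BachiLi/diffvg README (package versions and extra dependencies may differ on your system):

```shell
# pydiffvg comes from diffvg and must be built from source (not on PyPI).
# CMake and PyTorch are also required.
git clone https://github.com/BachiLi/diffvg
cd diffvg
git submodule update --init --recursive
# Python-side dependencies listed in the diffvg README.
pip install svgwrite svgpathtools cssutils numba torch-tools visdom
python setup.py install
```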

Chamfer distance code

Hello there, thanks for the nice work! Could you point me to the code/script you used to compute the Chamfer distance numbers reported in the supplementary? Thanks!
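
For reference, a minimal PyTorch sketch of a symmetric Chamfer distance between two point sets; this is a generic implementation, not necessarily the script used for the numbers in the supplementary:

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (N, D) and b (M, D)."""
    d = torch.cdist(a, b) ** 2  # pairwise squared distances, shape (N, M)
    # Mean distance to the nearest neighbour, in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Example with two random 2D point sets.
a, b = torch.rand(100, 2), torch.rand(120, 2)
print(chamfer_distance(a, b).item())
```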

problems when training

I run: CUDA_VISIBLE_DEVICES=1 python run.py -c configs/emoji.yaml
Traceback (most recent call last):
File "run.py", line 116, in
runner.fit(experiment)
File "...lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File ".../lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File ".../lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_train
self._run_sanity_check(self.lightning_module)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1111, in _run_sanity_check
self._evaluation_loop.reload_evaluation_dataloaders()
File ".../lib/python3.7/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 173, in reload_evaluation_dataloaders
self.trainer.reset_val_dataloader(model)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 437, in reset_val_dataloader
self.num_val_batches, self.val_dataloaders = self._reset_eval_dataloader(model, "val")
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 398, in _reset_eval_dataloader
num_batches = len(dataloader) if has_len(dataloader) else float("inf")
File ".../lib/python3.7/site-packages/pytorch_lightning/utilities/data.py", line 63, in has_len
raise ValueError("Dataloader returned 0 length. Please make sure that it returns at least 1 batch")
ValueError: Dataloader returned 0 length. Please make sure that it returns at least 1 batch

Can anyone who has trained successfully tell me how to fix this?
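
A frequent cause of this particular error (an assumption, not a confirmed diagnosis) is that the validation split under data_path is empty or smaller than the validation batch size, so the validation dataloader yields zero batches. A quick sanity check of the directories referenced by configs/emoji.yaml:

```python
import os

# Hypothetical sanity check: confirm the data directories from
# configs/emoji.yaml actually contain images before launching training.
data_root = "./data/emoji/"
for split in ("train", "val"):
    path = os.path.join(data_root, split)
    n = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f"{path}: {n} files")
```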

About training details

Dear author,
Regarding the training details, I have the following questions:

  1. How many images are used to train the emoji model? I see only 13 images in data/emoji/train and in the training logs. Did you train with only these data?
  2. Is the training process completely end-to-end? Do I need to set other options in the yaml file under configs to get better results, or did you train with the yaml file as provided?
  3. I ran CUDA_VISIBLE_DEVICES=1 python run.py -c configs/emoji.yaml and have trained for 2100 epochs so far, but the loss does not seem to decrease.
Epoch 02177: val_loss  was not in top 1
Epoch 2178: 100%|█| 13/13 [00:11<00:00,  1.10it/s, loss=3.677, v_num=110, Reconstruction_Loss=3.67, KLD=0, aux_loss=0, other losses=0beta:  0.0
learning rate:  0.0001

Epoch 02178: val_loss  was not in top 1
Epoch 2179: 100%|█| 13/13 [00:13<00:00,  1.03s/it, loss=3.731, v_num=110, Reconstruction_Loss=3.59, KLD=0, aux_loss=0, other losses=0beta:  0.0
learning rate:  0.0001

Epoch 02179: val_loss  was not in top 1
Epoch 2180: 100%|█| 13/13 [00:13<00:00,  1.01s/it, loss=3.689, v_num=110, Reconstruction_Loss=3.56, KLD=0, aux_loss=0, other losses=0beta:  0.0
learning rate:  0.0001

Epoch 02180: val_loss  was not in top 1
Epoch 2181: 100%|█| 13/13 [00:12<00:00,  1.00it/s, loss=3.693, v_num=110, Reconstruction_Loss=3.33, KLD=0, aux_loss=0, other losses=0beta:  0.0
learning rate:  0.0001

[images: input and reconstruction]

Is this result normal? Is it due to the small training dataset, or to the yaml configuration?

ModuleNotFoundError: No module named 'pydiffvg'

This is an interesting project. How do I install pydiffvg?
import pydiffvg
ModuleNotFoundError: No module named 'pydiffvg'

python3 -m pip install diffvg
ERROR: Could not find a version that satisfies the requirement diffvg
ERROR: No matching distribution found for diffvg

```shell
python3 -m pip install pydiffvg
ERROR: Could not find a version that satisfies the requirement pydiffvg
ERROR: No matching distribution found for pydiffvg
```

Can't install dependencies

Hi there!

I'm trying to install requirements for this project to run it:
pip3 install -r requirements.txt

But get the following error message:

ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/private/tmp/build/80754af9/idna_1593446292537/work'

It seems the requirements file pins some packages to your local builds. Could you please help me get rid of those so I can still run the code?

Thanks for the great paper!
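
One generic workaround (not an official fix) is to strip the `pkg @ file:///...` local-build pins from requirements.txt and install the remaining packages normally:

```shell
# Drop the local-build pins (idna, requests, ...) and install the rest.
grep -v '@ file:///' requirements.txt > requirements_clean.txt
pip3 install -r requirements_clean.txt
```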

KeyError: 'z_layers'

(Im2Vec) dyf-ai@dyfai-b450-aorus-m:/media/dyf-ai/Code/src/projectX/Im2Vec/logs/VectorVAEnLayers/version_110$ CUDA_VISIBLE_DEVICES=0 python eval_local.py -c configs/emoji.yaml
/media/dyf-ai/Code/src/projectX/Im2Vec /media/dyf-ai/Code/src/projectX/Im2Vec/./data/emoji/
Using Differential Compositing
loading:  epoch=667.ckpt
/home/dyf-ai/.local/lib/python3.8/site-packages/torch/nn/functional.py:1709: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
  File "eval_local.py", line 54, in <module>
    experiment.sample_interpolate(save_dir=config['logging_params']['save_dir'], name=config['logging_params']['name'],
  File "/media/dyf-ai/Code/src/projectX/Im2Vec/logs/VectorVAEnLayers/version_110/experiment.py", line 214, in sample_interpolate
    interpolate_samples = self.model.naive_vector_interpolate(test_input, verbose=False)
  File "/media/dyf-ai/Code/src/projectX/Im2Vec/logs/VectorVAEnLayers/version_110/models/vector_vae_nlayers.py", line 247, in naive_vector_interpolate
    output = self.composite_fn(layers = layers)
  File "/media/dyf-ai/Code/src/projectX/Im2Vec/logs/VectorVAEnLayers/version_110/models/vector_vae_nlayers.py", line 94, in soft_composite
    z_layers = kwargs['z_layers']
KeyError: 'z_layers'

Some errors occurred ...
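
From the traceback, naive_vector_interpolate calls composite_fn(layers=layers) while soft_composite reads kwargs['z_layers'], which is never passed on this code path. A tiny self-contained illustration of the mismatch and a tolerant kwargs.get default (a hypothetical workaround, not the author's intended fix; the right default for z_layers depends on the compositing logic):

```python
# Illustration only: the failing pattern and a defensive default.
def soft_composite(**kwargs):
    layers = kwargs['layers']
    # kwargs['z_layers'] raises KeyError when the caller omits it;
    # kwargs.get() with a placeholder default avoids the crash.
    z_layers = kwargs.get('z_layers', [None] * len(layers))
    return list(zip(layers, z_layers))

print(soft_composite(layers=["path_0", "path_1"]))
```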

No module named 'pydiffvg'

Hi. When I run the code, I ran into this problem:
Traceback (most recent call last):
  File "eval_local.py", line 6, in <module>
    from models import *
  File "/raid/home/zhujingjie/Tools/Im2Vec-master/logs/VectorVAEnLayers/version_110/models/__init__.py", line 3, in <module>
    from .vector_vae import VectorVAE
  File "/raid/home/zhujingjie/Tools/Im2Vec-master/logs/VectorVAEnLayers/version_110/models/vector_vae.py", line 10, in <module>
    import pydiffvg

ModuleNotFoundError: No module named 'pydiffvg'

I did a lot of research, but I couldn't find this 'pydiffvg' library. Could you tell me how to install it?
Thanks!

training on real images?

Has anyone tried training this on a dataset of real images, maybe something on the level of CIFAR-10?

Curious what kind of results to expect.

How to save in .svg files?

Hello, I would like to ask how to save a file in .svg format. There seems to be an error at line 281 in vector_vae_nlayers.py.

(1) I changed experiment.py at line 252 to:
if save_svg:
self.model.save(test_input, save_dir, name)

and ran the command: CUDA_VISIBLE_DEVICES=1 python3 eval_local.py -c configs/emoji.yaml

UnboundLocalError: local variable 'color' referenced before assignment
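
If the goal is simply to export the decoded paths, pydiffvg ships a save_svg helper that writes shapes and shape groups straight to an .svg file. A sketch assuming you already have the shapes/shape_groups the model feeds to the rasterizer (decode_shapes below is a hypothetical placeholder for that step):

```python
import pydiffvg

canvas_width, canvas_height = 128, 128
# Hypothetical: obtain pydiffvg shapes and shape groups from the decoder.
shapes, shape_groups = decode_shapes(latent)

# Write the vector output directly instead of rasterizing it first.
pydiffvg.save_svg("output.svg", canvas_width, canvas_height, shapes, shape_groups)
```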

Segmentation tool

Hi,

Could you please provide a link to the off-the-shelf segmentation tool (for "Depixelizing Pixel Art") that you have used for generating your clustered samples, so that we can cluster our dataset before using it to train im2vec? Will im2vec work without these clustered samples?

thanks

pip install -r requirements.txt. ERRORS

This is such an amazing project, but I got errors after running pip install -r requirements.txt.
How can I install idna @ file:///tmp/build/80754af9/idna_1593446292537/work, requests @ file:///tmp/build/80754af9/requests_1592841827918/work, diffvg==0.0.1, emd-ext==0.0.0, pointnet2==0.0.0, pointnet2-ops==3.0.0, and pytorch-points==0.9? My setup is CUDA 11.1, Python 3.6.13, Ubuntu 18.04.
I hope to get your reply, thanks!!!

The difference between VectorVAE and VectorVAEnLayers

Hi Pradyumna,

The approach you proposed is very well designed, and it took me some effort to understand.

Could you give a high-level description of the difference between VectorVAE and VectorVAEnLayers? Thank you very much!

Best,
PH

Default config (emoji.yaml) fails to converge

Hey, thanks so much for the code release. Really excited about the potential of this project.

I think the default config might have some incorrect hyperparameters? After 300-400 epochs I'm still not seeing the model able to reconstruct the emoji dataset.

Any guidance on what parameters to use would be appreciated. I tried what was suggested in #5 but did not see an improvement.

[image: recons_VectorVAEnLayers_0383]

Training example from README won't work

Hi!

I'm trying to run training via python3 run.py -c configs/emoji.yaml, as described in the README.

However I get the following error:

/home/jupyter/.local/lib/python3.7/site-packages/pytorch_lightning/loggers/test_tube.py:105: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the `pytorch_lightning.loggers.TensorBoardLogger` as an alternative.
  "The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the"
logs//VectorVAEnLayers/version_110/
{'name': 'VectorVAEnLayers', 'in_channels': 3, 'latent_dim': 128, 'loss_fn': 'MSE', 'paths': 20, 'beta': 0, 'radius': 3, 'scale_factor': 1, 'learn_sampling': False, 'only_auxillary_training': False, 'memory_leak_training': False, 'other_losses_weight': 0, 'composite_fn': 'soft'}
Using Differential Compositing
{'dataset': 'irrelavant', 'data_path': './data/emoji/', 'img_size': 128, 'batch_size': 4, 'val_batch_size': 8, 'val_shuffle': True, 'LR': 0.0005, 'weight_decay': 0.0, 'scheduler_gamma': 0.95, 'grow': True} logs/VectorVAEnLayers
Traceback (most recent call last):
  File "run.py", line 100, in <module>
    **config['trainer_params'])
  File "/home/jupyter/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 38, in insert_env_defaults
    return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'log_save_interval'
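
The installed pytorch_lightning no longer accepts the log_save_interval argument that the config forwards to Trainer. The simplest route is to install the pytorch-lightning version pinned in requirements.txt; alternatively, a generic sketch that drops unknown trainer kwargs before constructing the Trainer (a workaround, not the repo's intended setup):

```python
import inspect
from pytorch_lightning import Trainer

# config['trainer_params'] is the dict parsed from configs/emoji.yaml in run.py.
trainer_params = dict(config['trainer_params'])
# Keep only the keyword arguments this pytorch_lightning version still supports.
supported = set(inspect.signature(Trainer.__init__).parameters)
runner = Trainer(**{k: v for k, v in trainer_params.items() if k in supported})
```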

How to train with celeba dataset?

Hi,

I noticed that there is a celeba dataset training interface in the dataloader. How do we train with this dataset? Do I need to change the self.colors attribute in the VectorVAEnLayers class?

Thank you.

MisconfigurationException in Training Example

Hello there,

When I tried to run the training example, I got the following error:

Traceback (most recent call last):
File "run.py", line 103, in
runner.fit(experiment)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
self._call_and_handle_interrupt(
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
results = self._run_stage()
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
return self._run_train()
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
self.fit_loop.run()
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
batch_output = self.batch_loop.run(batch, batch_idx)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
result = self._run_optimization(
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 258, in _run_optimization
result = closure.consume_result()
File "/home/eric/Projects/Im2Vec/venv3.8/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/closure.py", line 51, in consume_result
raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: The closure hasn't been executed. HINT: did you call optimizer_closure() in your optimizer_step hook? It could also happen because the optimizer.step(optimizer_closure) call did not execute it internally.
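
Newer pytorch_lightning releases require a user-overridden optimizer_step to actually execute the closure they pass in. If experiment.py overrides optimizer_step (which this error suggests), the hook is expected to look roughly like the sketch below; the argument list follows the Lightning docs, and the surrounding logic in the repo is an assumption:

```python
# Hypothetical shape of the hook in experiment.py for recent Lightning versions:
# the closure must be run, typically by handing it to optimizer.step().
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, **kwargs):
    optimizer.step(closure=optimizer_closure)
```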

RuntimeError: stack expects a non-empty TensorList

Hi Pradyumna,

Thank you very much for sharing your code. This will definitely inspire a lot of future work. I ran into an issue when running your code; I hope to get your help when you have time.

I ran the training command, but it logs:

/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:102: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 32 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Epoch 0: 100%|████████████| 13/13 [00:00<00:00, 179.52it/s, loss=nan, v_num=110]Traceback (most recent call last):
File "run.py", line 103, in
runner.fit(experiment)
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 869, in run_train
self.train_loop.run_training_epoch()
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 566, in run_training_epoch
self.on_train_epoch_end(epoch_output)
File "/home/phtu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 606, in on_train_epoch_end
training_epoch_end_output = model.training_epoch_end(processed_epoch_output)
File "/home/phtu/Research/Meta/Im2Vec/experiment.py", line 115, in training_epoch_end
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
RuntimeError: stack expects a non-empty TensorList

It seems the variable "outputs" has length 0. Do you know a possible reason for this issue? Thank you.
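
The crash happens because training_epoch_end stacks an empty outputs list. The root cause is most likely a pytorch_lightning version mismatch (step outputs are not being collected), but a defensive guard in experiment.py would at least avoid the hard failure; a hypothetical sketch:

```python
import torch

# Hypothetical guard in experiment.py: skip the epoch-level average when
# no per-step outputs were collected by the trainer.
def training_epoch_end(self, outputs):
    if not outputs:
        return
    avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
    self.log('avg_train_loss', avg_loss)
```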

Post-decoding process

1) According to the paper, the vectorized output elements from the decoder are first rasterized and then composited. Is that because the differentiable compositor only accepts images as input?
2) Is it possible to composite the vectorized elements first and then rasterize the composited output, assuming there is a differentiable compositor that accepts vectorized elements as inputs? Would this make it easier to extract the SVG directly from the output of the compositor? What impact, if any, would this have on the final rasterized output?

thanks

Dependency Conflict with Pillow

Hello there,

requirements.txt specifies that the project needs nuscenes-devkit==1.0.8 and torchmeta==1.6.1. However, when I tried to install these two packages, a dependency conflict occurred: nuscenes-devkit==1.0.8 requires Pillow<=6.2.1 while torchmeta==1.6.1 requires Pillow>=7.0.0.

Can you please help resolve this conflict? Thanks!
