
FashionPlus's Issues

unable to find test.p file

Hi,

Thanks for this amazing library. I was just exploring it and got the following error. Can you please help me with this?

FileNotFoundError: [Errno 2] No such file or directory: 'FashionPlus/generation/datasets/demo/test.p'

Thanks.

Error in dimensions of model outputs while running for inference

Hi,

Thanks for this repo, the work is really interesting. I tried to run the given code on the provided sample data and ran into the following problem.

Encode clothing features
Image shape: torch.Size([1, 3, 256, 256])
Label shape: torch.Size([1, 1, 256, 256])
/content/drive/My Drive/projects/FashionPlus/generation/models/pix2pixHD_model.py:407: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
image = Variable(image.cuda(), volatile=True)
Traceback (most recent call last):
File "./encode_clothing_features.py", line 58, in
feat = model.module.encode_features(data['image'], data['label'])
File "/content/drive/My Drive/projects/FashionPlus/generation/models/pix2pixHD_model.py", line 423, in encode_features
val[0, k] = feat_map[idx[0], idx[1] + k, idx[2], idx[3]].data[0]
IndexError: invalid index of a 0-dim tensor. Use tensor.item() in Python or tensor.item<T>() in C++ to convert a 0-dim tensor to a number

I noticed that you are using "TrainOptions" and don't have an inference script that uses "TestOptions" instead. Could this error be due to parameter differences between those options? Also, could you let me know whether there are plans to provide an updated codebase? I think it was mentioned in another issue that you were planning to.

Thanks in advance.
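Both messages in this traceback point at PyTorch API changes rather than a data problem: volatile=True was removed in favour of torch.no_grad(), and indexing a tensor down to a single element now yields a 0-dim tensor, so .data[0] has to become .item(), exactly as the error message says. A minimal, self-contained sketch of the two replacements (an illustration, not a patch to pix2pixHD_model.py):

    import torch

    feat_map = torch.randn(1, 8, 4, 4)     # stand-in for the encoder's feature map
    idx = (0, 2, 1, 3)

    # Old style (PyTorch <= 0.3): feat_map[idx].data[0]
    # On PyTorch >= 0.4 feat_map[idx] is a 0-dim tensor, so .data[0] raises the
    # IndexError shown above; .item() converts it to a plain Python number instead.
    value = feat_map[idx].item()

    # The volatile=True warning is resolved the way the message suggests:
    with torch.no_grad():
        encoded = feat_map * 2             # stand-in for the no-grad inference call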

can't open file 'prepare_input_data.py'

Hello,

Thanks for this amazing library. I ran the run_prepare_data.sh file but got the following error. Can you please help me with this?

python: can't open file 'prepare_input_data.py': [Errno 2] No such file or directory
Thanks

No module named options.train_options

On step 4 I got the following error: ImportError: No module named options.test_options
Traceback (most recent call last):
File "./encode_clothing_features.py", line 8, in
from options.train_options import TrainOptions
ImportError: No module named options.train_options

  1. Encode input images into latent codes:
    Change ROOT_DIR in script to FashionPlus' absolute path on your system.

cd preprocess
./encode_shape_texture_features.sh

/Desktop/FashionPlus-master/preprocess$ ./encode_shape_texture_features.sh
++ NZ=8
++ OUTPUT_NC=18
++ MAX_MULT=8
++ DOWN_SAMPLE=7
++ BOTNK=1d
++ LAMBDA_KL=0.0001
++ DIVIDE_K=4
++ CLASS=humanparsing
++ LABEL_DIR=/home/monika/Desktop/FashionPlus-master//datasets/labels/
++ SHAPE_GEN_PATH=/home/monika/Desktop/FashionPlus-master//checkpoint/
++ python ./encode_features.py --phase test --dataroot ./datasets/demo --label_dir /home/monika/Desktop/FashionPlus-master//datasets/labels/ --label_txt_path ./datasets/humanparsing/clothing_labels.txt --dataset_param_file ./datasets/humanparsing/garment_label_part_map.json --name humanparsing --share_decoder --share_encoder --separate_clothing_unrelated --nz 8 --checkpoints_dir /home/monika/Desktop/FashionPlus-master//checkpoint/ --output_nc 18 --use_dropout --lambda_kl 0.0001 --max_mult 8 --n_downsample_global 7 --bottleneck 1d --resize_or_crop pad_and_resize --loadSize 256 --batchSize 1 --divide_by_K 4
Traceback (most recent call last):
  File "./encode_features.py", line 17, in <module>
    from options.test_options import TestOptions
ImportError: No module named options.test_options
Traceback (most recent call last):
  File "./encode_clothing_features.py", line 8, in <module>
    from options.train_options import TrainOptions
ImportError: No module named options.train_options
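A note on the ImportError itself: both encode_features.py and encode_clothing_features.py import from an options package, and tracebacks elsewhere on this page show that package living under generation/ (separate_vae appears to have its own copy). The failure above is therefore an import-path problem, most likely because the scripts were launched from a directory where options/ is not visible. A hedged workaround sketch under that assumption; REPO_ROOT and the subdirectory name are guesses to adjust for your checkout:

    import os
    import sys

    # Hypothetical fix-up: put the directory that contains the options/ package on
    # the import path before the failing import runs. The subdirectory name is an
    # assumption based on the generation/options/... path seen in other tracebacks.
    REPO_ROOT = "/home/monika/Desktop/FashionPlus-master"
    sys.path.insert(0, os.path.join(REPO_ROOT, "generation"))   # or "separate_vae" for encode_features.py

    from options.train_options import TrainOptions              # should now resolve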

Is there other way to download the dataset?

I tried to download the dataset, but I don't know how to use Baidu Netdisk...

I went to the website with the password, but it seemed that I needed an ID and password for Baidu Netdisk.
Is that right?

I thought I could download the dataset directly from the website you mentioned.

Texture Encoder Network structure

Hi,
I looked at the network structure of the texture encoder. Why is upsampling needed after downsampling? Doesn't it lose a lot of detail? Is it possible to get the encoding result directly, without upsampling? Also, how is the fashion classifier trained?
Looking forward to your reply.
Thanks.
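For readers unfamiliar with the pattern the question refers to, here is a generic toy down-then-up convolutional encoder. It is purely illustrative and is not FashionPlus's texture encoder; it only makes the two alternatives in the question concrete: use the low-resolution bottleneck directly as the code, or upsample it back to the input resolution first.

    import torch
    import torch.nn as nn

    class ToyHourglassEncoder(nn.Module):
        """Toy encoder: downsample to a small bottleneck, then upsample again."""
        def __init__(self, in_ch=3, mid_ch=32, code_ch=8):
            super().__init__()
            self.down = nn.Sequential(                      # 256 -> 128 -> 64
                nn.Conv2d(in_ch, mid_ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(mid_ch, code_ch, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.up = nn.Sequential(                        # 64 -> 128 -> 256
                nn.ConvTranspose2d(code_ch, mid_ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(mid_ch, code_ch, 4, stride=2, padding=1),
            )

        def forward(self, x):
            bottleneck = self.down(x)        # low-resolution code, spatial detail reduced
            upsampled = self.up(bottleneck)  # back at input resolution
            return bottleneck, upsampled

    x = torch.randn(1, 3, 256, 256)
    code, up = ToyHourglassEncoder()(x)
    print(code.shape, up.shape)   # torch.Size([1, 8, 64, 64]) torch.Size([1, 8, 256, 256])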

FileNotFoundError: [Errno 2] No such file or directory: 'results/Lab/demo/test_shape_codes.p'

I followed the steps mentioned in the README file and everything worked as expected until I reached the 4th command, running ./encode_shape_texture_features.sh inside the ./preprocess directory.

Here is the tail of the log, where the error messages are:
/home/h/FashionPlus/checkpoint/humanparsing/latest_Separate_encoder.pth not exists yet!
/home/h/FashionPlus/checkpoint/humanparsing/latest_Together_encoder.pth not exists yet!
/home/h/FashionPlus/checkpoint/humanparsing/latest_Decoder.pth not exists yet!
create web directory /home/h/FashionPlus/checkpoint/humanparsing/web...
/home/h/anaconda3/envs/fashion/lib/python3.6/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead. (this warning is printed three times)
Traceback (most recent call last):
  File "./encode_features.py", line 84, in <module>
    with open(save_name, 'wb') as writefile:
FileNotFoundError: [Errno 2] No such file or directory: 'results/Lab/demo/test_shape_codes.p'
THCudaCheck FAIL file=torch/csrc/cuda/Module.cpp line=32 error=38 : no CUDA-capable device is detected
Traceback (most recent call last):
  File "./encode_clothing_features.py", line 18, in <module>
    opt = TrainOptions().parse()
  File "/home/h/FashionPlus/generation/options/base_options.py", line 122, in parse
    torch.cuda.set_device(self.opt.gpu_ids[0])
  File "/home/h/anaconda3/envs/fashion/lib/python3.6/site-packages/torch/cuda/__init__.py", line 262, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at torch/csrc/cuda/Module.cpp:32

And here is the configuration from the log:
./encode_shape_texture_features.sh
++ NZ=8
++ OUTPUT_NC=18
++ MAX_MULT=8
++ DOWN_SAMPLE=7
++ BOTNK=1d
++ LAMBDA_KL=0.0001
++ DIVIDE_K=4
++ CLASS=humanparsing
++ LABEL_DIR=/home/h/FashionPlus/datasets/labels/
++ SHAPE_GEN_PATH=/home/h/FashionPlus/checkpoint/
++ python ./encode_features.py --phase test --dataroot ./datasets/demo --label_dir /home/h/FashionPlus/datasets/labels/ --label_txt_path ./datasets/humanparsing/clothing_labels.txt --dataset_param_file ./datasets/humanparsing/garment_label_part_map.json --name humanparsing --share_decoder --share_encoder --separate_clothing_unrelated --nz 8 --checkpoints_dir /home/h/FashionPlus/checkpoint/ --output_nc 18 --use_dropout --lambda_kl 0.0001 --max_mult 8 --n_downsample_global 7 --bottleneck 1d --resize_or_crop pad_and_resize --loadSize 256 --batchSize 1 --divide_by_K 4
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
bottleneck: 1d
center_crop: False
checkpoints_dir: /home/h/FashionPlus/checkpoint/
cluster_path: features_clustered_010.npy
condition_idx: None
dataroot: ./datasets/demo
dataset_mode: aligned
dataset_param_file: ./datasets/humanparsing/garment_label_part_map.json
display_id: 1
display_port: 8097
display_server: http://localhost
display_winsize: 256
divide_by_K: 4
engine: None
export_onnx: None
fineSize: 256
gpu_ids: [0]
how_many: 50
init_type: xavier
input_nc: 3
isTrain: False
label_dir: /home/h/FashionPlus/datasets/labels/
label_txt_path: ./datasets/humanparsing/clothing_labels.txt
lambda_kl: 0.0001
loadSize: 256
load_feat_dir: ./results/
log_to_filename: /checkpoint/kimberlyhsiao/.visdom/
max_dataset_size: inf
max_mult: 8
model: bicycle_gan
nThreads: 4
n_blocks_global: 9
n_downsample_global: 7
n_samples: 5
name: humanparsing
ndf: 64
nef: 64
ngf: 64
nl: relu
no_flip: False
norm: instance
ntest: inf
nz: 8
onnx: None
output_nc: 18
phase: test
reference_idx: None
resize_or_crop: pad_and_resize
results_dir: ./results/
separate_clothing_unrelated: True
serial_batches: False
share_decoder: True
share_encoder: True
suffix:
swap_piece: None
tf_log: False
upsample: basic
use_dropout: True
verbose: False
where_add: all
which_direction: AtoB
which_epoch: latest
which_model_netE: resnet_256
which_model_netG: unet_256
-------------- End ----------------
dataset [AlignedDataset] was created
#training images = 3
/home/h/FashionPlus/checkpoint/humanparsing/latest_Separate_encoder.pth not exists yet!
/home/h/FashionPlus/checkpoint/humanparsing/latest_Together_encoder.pth not exists yet!
/home/h/FashionPlus/checkpoint/humanparsing/latest_Decoder.pth not exists yet!
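Two separate problems are visible in this log. First, no CUDA-capable device is detected, so the demo cannot select GPU 0; second, open(save_name, 'wb') fails with errno 2, which on a write almost always means the parent directory (results/Lab/demo/ here) does not exist yet. The "latest_*.pth not exists yet!" lines additionally suggest the pretrained checkpoints were never placed in the checkpoint/ folder. A hedged guard for the directory part only (not code from the repo):

    import os
    import pickle

    # Create the output directory before writing, so the pickle dump cannot fail
    # with "No such file or directory" on the parent path.
    save_name = "results/Lab/demo/test_shape_codes.p"   # path taken from the traceback above
    os.makedirs(os.path.dirname(save_name), exist_ok=True)
    with open(save_name, "wb") as writefile:
        pickle.dump({}, writefile)                      # placeholder payload, for illustration only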

Unable to update test_shape_codes.p in results/Lab/demo/

Hello All,

I have tried all the steps mentioned in the repo and they work, but I get an error when running the last step to check the results on images.

Command run:

./scripts/edit_and_visualize_demo.sh 3.jpg shape_and_texture True 0 10 0.25

After execution it gives me this error:

"Traceback (most recent call last):
  File "update_demo.py", line 596, in <module>
    piece_shape_feat_dict = pickle.load(readfile)
EOFError: Ran out of input
Traceback (most recent call last):
  File "process_face.py", line 124, in <module>
    assert(bbox is not None), 'Cannot find file %s in dictionary' % fname
AssertionError: Cannot find file final_3.jpg in dictionary"

Note: although the file results/Lab/demo/test_shape_codes.p is already present at the mentioned path, it is not updated with the results.

Kindly suggest
Thanks in advance
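The EOFError ("Ran out of input") is what pickle.load raises on an empty file, so test_shape_codes.p merely existing at results/Lab/demo/ is not enough; it has to contain the codes written by the preprocessing step, and the AssertionError from process_face.py likewise suggests the per-image dictionaries were never populated for 3.jpg. A hedged sanity check to run before edit_and_visualize_demo.sh (not part of the repo):

    import os
    import pickle

    path = "results/Lab/demo/test_shape_codes.p"
    # A zero-byte file reproduces exactly the "Ran out of input" EOFError above.
    assert os.path.getsize(path) > 0, "pickle is empty; re-run the encoding step"
    with open(path, "rb") as f:
        codes = pickle.load(f)
    print(type(codes), len(codes) if hasattr(codes, "__len__") else "no length")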

how to run .py files step by step

Dear authors,
I have read your paper and am very interested in your research, but when I run the .sh files following the method you described, I get some errors. Now I want to run the .py files by myself; I think this will deepen my understanding. Because my programming ability is weak, I am a bit confused about running your code. Can you tell me how to run the .py files step by step?
Thank you very much.

IndexError: index 4 is out of bounds for dimension 1 with size 3

Hi,

Thanks for this amazing library. Can you please help me with this?

When I run the /FashionPlus/separate_vae/encode_features.py file, the following error occurs.

Traceback (most recent call last):
File "...FashionPlus/separate_vae/encode_features.py", line 63, in
label_encodings, num_labels = model.encode_features(Variable(data['input']))
File ".../FashionPlus/separate_vae/models/separate_clothing_encoder_models.py", line 201, in encode_features
zs_encoded[:, count_i*self.opt.nz: (count_i+1)*self.opt.nz] = self.Separate_encoder(real_B_encoded[:,label_i].unsqueeze(1))
IndexError: index 4 is out of bounds for dimension 1 with size 3

Thanks.
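The failing line slices one channel per clothing label out of real_B_encoded (real_B_encoded[:, label_i]), so the encoder apparently expects an input with one channel per label, while the tensor it received has only 3 channels, i.e. it looks like an RGB image rather than a per-label stack. Under that assumption (a guess from the traceback, not confirmed by the repo), this is the kind of conversion the input would need; using the demo scripts' --output_nc 18 as the label count:

    import torch
    import torch.nn.functional as F

    num_labels = 18                                          # e.g. --output_nc 18 in the demo scripts
    label_map = torch.randint(0, num_labels, (1, 256, 256))  # (B, H, W) integer segmentation labels
    one_hot = F.one_hot(label_map, num_classes=num_labels)   # (B, H, W, C)
    one_hot = one_hot.permute(0, 3, 1, 2).float()            # (B, C, H, W), one channel per label

    label_i = 4
    channel = one_hot[:, label_i].unsqueeze(1)               # index 4 is now valid: (1, 1, 256, 256)
    print(channel.shape)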

Issue with encode_shape_texture_features.sh

When I run encode_shape_texture_features.sh

I get the following error

File "/content/drive/My Drive/FashionPlus/separate_vae/data/pickle_dataset.py", line 21, in initialize
with open(os.path.join(opt.dataroot, '{}.p'.format(opt.phase)), 'rb') as readfile:
FileNotFoundError: [Errno 2] No such file or directory: './datasets/demo/test.p'

Now, when I create an empty test.p file at separate_vae/datasets/demo/test.p and run it again,

I get the following error
File "/content/drive/My Drive/FashionPlus/separate_vae/data/pickle_dataset.py", line 22, in initialize
self.pickle_file = pickle.load(readfile)
EOFError: Ran out of input
I guess this is due to reading an empty file.
How do you suggest fixing this?
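As guessed above, pickle.load raises EOFError ("Ran out of input") on a zero-byte file, so an empty placeholder cannot work: test.p has to be a real pickle in whatever format the preprocessing scripts write, which is not shown in this thread. A small demonstration of the failure mode, with a purely illustrative payload:

    import pickle

    # A zero-byte file reproduces the EOFError from pickle_dataset.py.
    open("empty.p", "wb").close()
    try:
        with open("empty.p", "rb") as f:
            pickle.load(f)
    except EOFError as err:
        print("empty pickle:", err)                  # Ran out of input

    # A syntactically valid pickle needs an actual object dumped into it; the dict
    # below is hypothetical and does not match FashionPlus's expected structure.
    with open("valid.p", "wb") as f:
        pickle.dump({"example_image.jpg": None}, f)
    with open("valid.p", "rb") as f:
        print(pickle.load(f))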
