
crazorback / aadg

44 stars, 1 watcher, 3 forks, 538 KB

[TMI'22] "AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation".

Home Page: https://arxiv.org/abs/2207.13249

Python 100.00%
domain-generalization pytorch reinforcement-learning retinal-images

aadg's People

Contributors

crazorback


aadg's Issues

Issues related to dataset and code

Hello author, I am very interested in your paper and code. Thanks for open-sourcing this great work. However, I found an issue when comparing labels and images: when loading the HRF dataset, the labels and images do not correspond. The main cause is that HRF images come with both the JPG and jpg extensions, so files with the JPG extension are dropped during loading, which misaligns the images and labels.
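A minimal sketch of collecting the files case-insensitively, so that both .jpg and .JPG are kept and images stay paired with labels; the directory names below are assumptions for illustration, not the repository's actual loader code:

    import glob, os

    image_dir = "./dataset/RVS/HRF/train/image"   # assumed layout
    label_dir = "./dataset/RVS/HRF/train/mask"    # assumed layout

    def list_images(folder):
        # Keep a file regardless of whether its extension is lower- or upper-case.
        exts = (".jpg", ".jpeg", ".png", ".tif", ".gif")
        return sorted(p for p in glob.glob(os.path.join(folder, "*"))
                      if p.lower().endswith(exts))

    images, labels = list_images(image_dir), list_images(label_dir)
    # A length mismatch here means images and labels would be misaligned downstream.
    assert len(images) == len(labels), (len(images), len(labels))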
In addition, I would like to ask why the first ten images are used for training and the last ten for testing when the STARE dataset is loaded. Why not use the entire STARE dataset for testing?

Doubt regarding evaluation

Hello! :)

First of all, thank you for sharing the code of your work. It's really useful!

I have a question regarding your experiments. I noticed from your ".yaml" files that you often run your code with K=3 source TRAIN datasets and leave one out as TEST. This TEST domain is your target domain, and you use it at evaluation time to calculate metrics and save the best checkpoint. Are my assumptions correct?
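If that reading is right, the setup is a standard leave-one-domain-out protocol; a small sketch under that assumption, with placeholder domain names:

    # Hedged sketch of the assumed leave-one-domain-out evaluation (K = 3 train, 1 test).
    domains = ["Domain1", "Domain2", "Domain3", "Domain4"]   # placeholder names
    for target in domains:
        sources = [d for d in domains if d != target]
        print(f"train on {sources}; evaluate and select the best checkpoint on {target}")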

Different datasets report different issues

Hi,

Thank you for your excellent code, but I have some questions when running it.

When I ran python run.py --cfg experiments/optic_sinkhorn/diversity.yaml --output_dir output/,
it raised:
main()
File "run.py", line 54, in main
lanuch_mp_worker(search_worker, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\distributed.py", line 31, in lanuch_mp_worker
main_worker(args.gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search.py", line 33, in search_worker
search_seg_dg_policy(gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search_dg.py", line 330, in search_seg_dg_policy
dis_criterion, model_optimizer, dis_optimizer, epoch, writer_dict, logger)
File "F:\WORKS\RLretinalseg\AADG-main\search_dg.py", line 51, in pretrain
dis_output = discriminator(feature.detach())
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "F:\WORKS\RLretinalseg\AADG-main\models\discriminator.py", line 53, in forward
fe = self.dis(x)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0

F:\WORKS\RLretinalseg\AADG-main>python run.py --cfg experiments/optic_sinkhorn/diversity.yaml --output_dir output/
[pyKeOps]: Warning, no cuda detected. Switching to cpu only.
Use GPU: 0 for training
=> creating model 'deeplabv3+' with 'mobilenet_v2
=> creating RNN controller
=> creating discriminator 'momentum_feature'
==> Loading train data from: ./dataset/Fundus/Domain1\train\ROIs/image/
==> Loading train data from: ./dataset/Fundus/Domain2\train\ROIs/image/
==> Loading train data from: ./dataset/Fundus/Domain3\train\ROIs/image/
-----Total number of images in train: 469
==> Loading test data from: ./dataset/Fundus/Domain4\test\ROIs/image/
-----Total number of images in test: 80
=> creating output\optic\diversity_2022-10-12-12-08
=> creating log\optic\deeplabv3+\diversity_2022-10-12-12-08_2022-10-12-12-08
[pyKeOps]: Warning, no cuda detected. Switching to cpu only.
[pyKeOps]: Warning, no cuda detected. Switching to cpu only.
[pyKeOps]: Warning, no cuda detected. Switching to cpu only.
[pyKeOps]: Warning, no cuda detected. Switching to cpu only.
Traceback (most recent call last):
File "run.py", line 65, in
main()
File "run.py", line 54, in main
lanuch_mp_worker(search_worker, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\distributed.py", line 31, in lanuch_mp_worker
main_worker(args.gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search.py", line 33, in search_worker
search_seg_dg_policy(gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search_dg.py", line 332, in search_seg_dg_policy
dis_criterion, model_optimizer, dis_optimizer, epoch, writer_dict, logger)
File "F:\WORKS\RLretinalseg\AADG-main\search_dg.py", line 51, in pretrain
dis_output = discriminator(feature.detach())
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "F:\WORKS\RLretinalseg\AADG-main\models\discriminator.py", line 53, in forward
fe = self.dis(x)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0
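For reference, this RuntimeError is a shape mismatch at the discriminator's Linear layer: the flattened feature fed to it has a different width than the layer's in_features. A generic, self-contained illustration with made-up sizes, not the repository's actual values:

    import torch
    import torch.nn as nn

    linear = nn.Linear(1280, 1)                    # layer expects 1280-dim inputs
    feature = torch.randn(4, 1280 * 4)             # flattened feature map is wider than expected
    print(feature.shape[1], linear.in_features)    # comparing these two numbers reveals the mismatch
    # linear(feature) would raise: RuntimeError: mat1 dim 1 must match mat2 dim 0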

When I ran python run.py --cfg experiments/rvs_sinkhorn/diversity_ex.yaml --output_dir output/,
it raised:
Traceback (most recent call last):
File "run.py", line 65, in
main()
File "run.py", line 54, in main
lanuch_mp_worker(search_worker, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\distributed.py", line 31, in lanuch_mp_worker
main_worker(args.gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search.py", line 35, in search_worker
search_seg2d_dg_policy(gpu, ngpus_per_node, config, args)
File "F:\WORKS\RLretinalseg\AADG-main\search_dg_2d.py", line 288, in search_seg2d_dg_policy
train_samplers, train_loader, test_loader = get_seg_dg_dataloader(config, args, batch_size, workers)
File "F:\WORKS\RLretinalseg\AADG-main\data\dataloader.py", line 19, in get_seg_dg_dataloader
testset = RetinalVesselSegmentation(dataroot, phase='test', splitid=cfg.DATASET.DG.TEST, transform=transform_test)
File "F:\WORKS\RLretinalseg\AADG-main\data\vessel.py", line 67, in init
self.image_list.append({'image': image_path, 'label': gtPath[idx], 'roi': roiPath[idx]})
IndexError: list index out of range
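The IndexError suggests the ground-truth / ROI lists built for the test split are shorter than the image list. A hedged diagnostic sketch; the sub-folder names are assumptions:

    import glob, os

    root = "./dataset/RVS/STARE"                   # assumed test-domain layout
    images = sorted(glob.glob(os.path.join(root, "image", "*")))
    gts    = sorted(glob.glob(os.path.join(root, "mask", "*")))
    rois   = sorted(glob.glob(os.path.join(root, "roi", "*")))
    print(len(images), len(gts), len(rois))        # any mismatch here reproduces the IndexError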

Loading test data from: ./dataset/RVS/STARE is wrong

Hello author, thank you for your excellent paper and code, but when I run
python run.py --cfg experiments/rvs_sinkhorn/diversity_ex.yaml --output_dir output/,
the following error appears:
=> creating discriminator 'momentum_feature'
==> Loading train data from: ./dataset/RVS/CHASEDB1/train
==> Loading train data from: ./dataset/RVS/DRIVE/train
==> Loading train data from: ./dataset/RVS/HRF/train
img_num: 50
key STARE has no data
20 images in CHASEDB1
20 images in DRIVE
10 images in HRF
-----Total number of images in train: 50
==> Loading test data from: ./dataset/RVS/STARE
Traceback (most recent call last):
File "run.py", line 65, in
main()
File "run.py", line 54, in main
lanuch_mp_worker(search_worker, config, args)
File "/root/autodl-tmp/AADG-main/distributed.py", line 31, in lanuch_mp_worker
main_worker(args.gpu, ngpus_per_node, config, args)
File "/root/autodl-tmp/AADG-main/search.py", line 35, in search_worker
search_seg2d_dg_policy(gpu, ngpus_per_node, config, args)
File "/root/autodl-tmp/AADG-main/search_dg_2d.py", line 288, in search_seg2d_dg_policy
train_samplers, train_loader, test_loader = get_seg_dg_dataloader(config, args, batch_size, workers)
File "/root/autodl-tmp/AADG-main/data/dataloader.py", line 19, in get_seg_dg_dataloader
testset = RetinalVesselSegmentation(dataroot, phase='test', splitid=cfg.DATASET.DG.TEST, transform=transform_test)
File "/root/autodl-tmp/AADG-main/data/vessel.py", line 67, in init
self.image_list.append({'image': image_path, 'label': gtPath[idx], 'roi': roiPath[idx]})
IndexError: list index out of range
Could you please advise on how to resolve it?
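Since the training loader already reports "key STARE has no data", a first check is whether the STARE folder exists under the dataset root and is non-empty. A hedged sketch, with folder names taken from the log:

    import os

    root = "./dataset/RVS"
    for name in ["CHASEDB1", "DRIVE", "HRF", "STARE"]:
        path = os.path.join(root, name)
        entries = os.listdir(path) if os.path.isdir(path) else []
        print(f"{name}: {len(entries)} entries" if entries else f"{name}: missing or empty")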

A few quick questions about testing

Hello, while reproducing the code I could not find a test script that loads the trained weights. I wrote a test script myself, but the results did not feel right. Could you please open-source the related test and split code?
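Until such a script is released, a minimal stand-alone evaluation sketch is shown below; the model constructor, checkpoint path, and state-dict key are placeholders, not the repository's actual API.

    import torch
    import torch.nn as nn

    # Stand-in for the repo's deeplabv3+/mobilenet_v2 model; replace with the real constructor.
    model = nn.Sequential(nn.Conv2d(3, 2, kernel_size=1))
    torch.save({"state_dict": model.state_dict()}, "model_best.pth")   # stands in for a saved checkpoint

    checkpoint = torch.load("model_best.pth", map_location="cpu")
    model.load_state_dict(checkpoint["state_dict"])                    # key name is an assumption
    model.eval()
    with torch.no_grad():
        image = torch.randn(1, 3, 256, 256)                            # stands in for a test image batch
        pred = model(image).argmax(dim=1)                              # class map for a 2-class head
    print(pred.shape)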

__init__() missing 1 required positional argument: 'classes'

Hello author, we have recently been replicating your paper and running experiments according to the procedure you disclosed. However, a TypeError is always raised during the run: __init__() missing 1 required positional argument: 'classes'. I look forward to your help.
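A generic, self-contained illustration of what raises this TypeError; the class below is hypothetical, not the repository's:

    class SegHead:
        def __init__(self, classes):
            self.classes = classes          # required positional argument

    try:
        SegHead()                           # reproduces the reported error
    except TypeError as err:
        print(err)                          # __init__() missing 1 required positional argument: 'classes'

    head = SegHead(classes=2)               # supplying the argument (e.g. from the config) resolves it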

Regarding the Issue of Data Augmentation

Thank you for your excellent paper and code. I have a few questions about your data augmentation method compared to traditional methods:

  • Is it true that no data augmentation is applied during the WARMUP_EPOCH: 60 period, and only the model and discriminator parameters are updated?
  • If I set WARMUP_EPOCH = END_EPOCH = 300, does that mean data augmentation has no effect (see the sketch after this list)?
    Looking forward to your reply.
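A hedged sketch of the schedule those questions imply; the epoch values come from the question and the comments are assumptions about the training loop, not the repository's code:

    WARMUP_EPOCH, END_EPOCH = 60, 300
    for epoch in range(END_EPOCH):
        if epoch < WARMUP_EPOCH:
            pass   # assumed: update only the segmentation model and the discriminator
        else:
            pass   # assumed: additionally sample augmentation policies from the controller
    # With WARMUP_EPOCH == END_EPOCH, the augmentation branch would never be reached.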

Calculating dist_12=sinkhorn(domain1_fe,domain2_fe) fails

Hello author, thanks for your great work and code! But when I run
python run.py --cfg experiments/rvs_sinkhorn/diversity_ex.yaml --output_dir output/,
the following error appears:

Epoch: [59][0/7] Time 0.605s (0.605s) Speed 39.7 samples/s Data 0.492s (0.492s) Seg Loss 0.24492 (0.24492) Dis Loss 0.48106 (0.48106)
Epoch: [59][5/7] Time 0.083s (0.175s) Speed 288.7 samples/s Data 0.000s (0.082s) Seg Loss 0.26395 (0.25965) Dis Loss 0.56321 (0.47989)
Train Epoch 59 time:0.1616 seg loss:0.2602 dis loss:0.4747 dsc@foreground:0.5191
Test Epoch 59 time:0.2243 dsc@foreground:0.7400 acc:0.9387 aucroc:0.9699 sp:0.9785 se:0.6822
=> saving checkpoint to output/rvs/diversity_ex_2023-03-09-22-08
=> best: False
[pyKeOps] Compiling libKeOpstorch18d5d2bc69 in /home/wmj/.cache/pykeops-1.5-cpython-38:
formula: Max_SumShiftExp_Reduction(( B - (P * ( IntCst(1) - (X | Y) / ( Norm2(X) * Norm2(Y) ) ) ) ),0)
aliases: X = Vi(0,128); Y = Vj(1,128); B = Vj(2,1); P = Pm(3,1);
dtype : float32
...
In file included from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/../detail/common.h:258:0,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/../attr.h:13,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/class.h:12,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/pybind11.h:13,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/torch/csrc/utils/pybind.h:9,
from ./torch_headers.h:19,
from :0:
/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template/../../version:1:1: error: expected unqualified-id before numeric constant
1.5
^~~
In file included from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/../cast.h:14:0,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/../attr.h:14,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/class.h:12,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/pybind11.h:13,
from /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/torch/csrc/utils/pybind.h:9,
from ./torch_headers.h:19,
from :0:
/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/pybind11/detail/../detail/descr.h:33:45: error: ‘index_sequence’ has not been declared
constexpr descr(char const (&s)[N + 1], index_sequence<Is...>) : text{s[Is]..., '\0'} {}
^~~~~~~~~~~~~~
compilation terminated due to -fmax-errors=2.
make[3]: *** [CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o] Error 1
make[2]: *** [CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/all] Error 2
make[1]: *** [CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/rule] Error 2
make: *** [libKeOps_template_9ffbe9dbe4] Error 2

--------------------- MAKE DEBUG -----------------
Command '['cmake', '--build', '.', '--target', 'libKeOps_template_9ffbe9dbe4', '--', 'VERBOSE=1']' returned non-zero exit status 2.
/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/cmake/data/bin/cmake -S/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template -B/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/make -f CMakeFiles/Makefile2 libKeOps_template_9ffbe9dbe4
make[1]: Entering directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/cmake/data/bin/cmake -S/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template -B/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 --check-build-system CMakeFiles/Makefile.cmake 0
/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/cmake/data/bin/cmake -E cmake_progress_start /home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4/CMakeFiles 2
/usr/bin/make -f CMakeFiles/Makefile2 CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/all
make[2]: Entering directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
/usr/bin/make -f CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/build.make CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/depend
make[3]: Entering directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
cd /home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 && /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/cmake/data/bin/cmake -E cmake_depends "Unix Makefiles" /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template /home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 /home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 /home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4/CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/DependInfo.cmake --color=
make[3]: Leaving directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
/usr/bin/make -f CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/build.make CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/build
make[3]: Entering directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
[ 50%] Building CXX object CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o
/usr/bin/c++ -DCUDA_BLOCK_SIZE=192 -DC_CONTIGUOUS=1 -DMAXIDGPU=1 -DMAXTHREADSPERBLOCK0=1024 -DMAXTHREADSPERBLOCK1=1024 -DMODULE_NAME=libKeOps_template_9ffbe9dbe4 -DSHAREDMEMPERBLOCK0=49152 -DSHAREDMEMPERBLOCK1=49152 -DUSE_CUDA=1 -DUSE_DOUBLE=0 -DUSE_HALF=0 -D_FORCE_INLINES -D_GLIBCXX_USE_CXX11_ABI=0 -D__TYPE__=float -DlibKeOps_template_9ffbe9dbe4_EXPORTS -I/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template/../.. -I/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4 -I/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/cmake_scripts/script_template/../../keops -I/usr/local/cuda-11.7/targets/x86_64-linux/include -I/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include -I/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/wmj/anaconda3/envs/pytorch1.13/include/python3.8 -isystem /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/pybind11/include -DUSE_OPENMP -fopenmp -Wall -Wno-unknown-pragmas -fmax-errors=2 -O3 -DNDEBUG -O3 -fPIC -fvisibility=hidden -include torch_headers.h -flto -fno-fat-lto-objects -std=gnu++14 -MD -MT CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o -MF CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o.d -o CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o -c /home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp
CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/build.make:76: recipe for target 'CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.cpp.o' failed
make[3]: Leaving directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
CMakeFiles/Makefile2:99: recipe for target 'CMakeFiles/libKeOps_template_9ffbe9dbe4.dir/all' failed
make[2]: Leaving directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
CMakeFiles/
make[1]: Leaving directory '/home/wmj/.cache/pykeops-1.5-cpython-38/build-pybind11_template-libKeOps_template_9ffbe9dbe4'
Makefile:124: recipe for target 'libKeOps_template_9ffbe9dbe4' failed


Traceback (most recent call last):
File "/home/wmj/DG/AADG/run.py", line 71, in
main()
File "/home/wmj/DG/AADG/run.py", line 60, in main
lanuch_mp_worker(search_worker, config, args)
File "/home/wmj/DG/AADG/distributed.py", line 31, in lanuch_mp_worker
main_worker(args.gpu, ngpus_per_node, config, args)
File "/home/wmj/DG/AADG/search.py", line 35, in search_worker
search_seg2d_dg_policy(gpu, ngpus_per_node, config, args)
File "/home/wmj/DG/AADG/search_dg_2d.py", line 345, in search_seg2d_dg_policy
normalized_rewards = train(config, train_loader, model, discriminator, model_criterion,
File "/home/wmj/DG/AADG/search_dg_2d.py", line 161, in train
dist_12 = sinkhorn(domain1_fe, domain2_fe)
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/geomloss/samples_loss.py", line 266, in forward
values = routines[self.loss][backend](
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/geomloss/sinkhorn_samples.py", line 148, in sinkhorn_online
a_x, b_y, a_y, b_x = sinkhorn_loop(
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/geomloss/sinkhorn_divergence.py", line 192, in sinkhorn_loop
a_x = λ * softmin(ε, C_xx, α_log) # OT(α,α)
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/geomloss/sinkhorn_samples.py", line 97, in softmin_online
return -ε * log_conv(x, y, f_y.view(-1, 1), torch.Tensor([1 / ε]).type_as(x)).view(
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.py", line 568, in call
out = GenredAutograd.apply(
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/torch/generic/generic_red.py", line 47, in forward
myconv = LoadKeOps(
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/common/keops_io.py", line 48, in init
self._safe_compile()
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/common/utils.py", line 75, in wrapper_filelock
func_res = func(*args, **kwargs)
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/common/keops_io.py", line 55, in _safe_compile
compile_generic_routine(
File "/home/wmj/anaconda3/envs/pytorch1.13/lib/python3.8/site-packages/pykeops/common/compile_routines.py", line 269, in compile_generic_routine
fname = list(pathlib.Path(template_build_folder).glob(template_name + "*.so"))[
IndexError: list index out of range
Could you give me some suggestions on how to solve it? Thank you.
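One hedged first step is to clear the stale KeOps build cache and re-test the bindings, which at least isolates whether the C++ toolchain itself is the problem; these helpers exist in pykeops 1.x, but please check your installed version:

    import pykeops

    pykeops.clean_pykeops()          # removes the compiled-formula cache (e.g. ~/.cache/pykeops-1.5-*)
    pykeops.test_torch_bindings()    # recompiles a tiny formula to verify the build toolchain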

Reproducing the tables

Dear author:
How can I reproduce the results in Table 2?
