Comments (9)
@JizhiziLi, I just wanted to say thanks for investigating this issue. I saw your analysis on issue #10; I think it explains everything.
Joining the question about the last two images (which are the most common use case). Any reason for it to output this result?
I faced the same thing. Is there any progress?
@Tetsujinfr Did you manage to work around this?
Joining the discussion, is there any progress?
P.S. Thank you for this amazing repository and research!
@Tetsujinfr Did you manage to work around this?
No, I did not have time to investigate it further. I do not have the bandwidth to re-train or really deep-dive on this. I am not sure if I did something wrong, but I do not understand why the 3 test images in the repo (teddy, drop and spider web) do not render exactly as the provided results in the repo. I suspect those repo renders might have been produced with the original trained model, which has been lost, hence the differences when using the new pre-trained model on my machine; but this is just an assumption on my end.
Would love to hear from the repo owner.
Hi there @Tetsujinfr,
Thanks for showing interest in our research and project!
First, for the 3 test images shown in the repo (teddy, drop and spider web), the results were produced with our released pre-trained model. Just following the instructions in Inference Code - Test on your sample images should give the exact same results. I don't know why you got different results; perhaps you are using a different version of PyTorch? Please note that my test environment is CUDA 10.2, PyTorch 1.7.1, and Python 3.7.7. Please also be sure to set test_choice=HYBRID in core/scripts/test_samples.sh, and make sure that global_ratio=1/4, local_ratio=1/2 in core/test.py.
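As a quick sanity check of what those two ratios mean in pixels, here is a small calculation. It assumes, as this thread implies, that the HYBRID strategy runs a coarse global pass and a finer local pass at the stated fractions of the original resolution; `hybrid_resolutions` is an illustrative helper, not a function from the AIM codebase:

```python
from fractions import Fraction

def hybrid_resolutions(w, h, global_ratio=Fraction(1, 4), local_ratio=Fraction(1, 2)):
    """Input sizes the two HYBRID passes would see, assuming the ratios
    scale the original width/height directly (illustrative helper only)."""
    global_size = (int(w * global_ratio), int(h * global_ratio))
    local_size = (int(w * local_ratio), int(h * local_ratio))
    return global_size, local_size

# Image 1 from this thread, 1408x2868:
print(hybrid_resolutions(1408, 2868))
# ((352, 717), (704, 1434))
```

So the global pass sees a heavily downsampled image (roughly trimap-level detail), while the local pass keeps twice that resolution for finer alpha details.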
Second, for the other two images: I took screenshots of your images, converted them to .jpg, and ran the inference code on my side; I actually got some good results. I show the input image size/format, the test strategy I used, and the results as follows.
Image 1, input size: 1408x2868, image format: .jpg, test_choice: RESIZE, test resize ratio: 1/4:
Image 2, input size: 1500x746, image format: .jpg, test_choice: HYBRID, test resize ratio: global_ratio=1/4, local_ratio=1/2:
Third, please note that since our model has been trained only on a limited synthetic matting dataset, the performance may vary when adapting to other images, e.g., low-resolution images, or images where the salient object occupies a large portion of the frame. To get better results on such images, you can try:
- modify the sample images' resolution and format to be similar to our sample images (e.g., shorter side of 1080 pixels, .jpg format);
- use a different test_choice in core/scripts/test_samples.sh and modify the resize ratios (global_ratio, local_ratio, resize_h, resize_w) in core/test.py to fit your images;
- re-train the model on another training set; the training code will be released once we finish cleaning up the codebase.
Let me know if you still encounter problems, thanks!
Cheers,
Jizhizi Li
@JizhiziLi Thanks for your explanation of testing non-AIM-500 images. If I want to run inference on my own images, do I just change dataset_choice from AIM_500 to SAMPLES?
Hi @bruinxiong, yes, changing data_choice from AIM_500 to SAMPLES will switch inference to your own sample images.