Reproduce DDAIG (dassl.pytorch, closed, 10 comments)

kaiyangzhou commented on July 20, 2024
Reproduce DDAIG

from dassl.pytorch.

Comments (10)

FrancescoCappio commented on July 20, 2024

Hi!
From what I understand, the DDAIG code based on Dassl does not use the validation splits of the training sets. Is this correct? The paper in fact does not mention any use of the validation split for model selection (unless I missed something). I am asking because, looking at your log files, I noticed that the DDAP code used to also evaluate performance on the validation set after each epoch. Was this evaluation used to choose the best model?

KaiyangZhou commented on July 20, 2024

You're correct. The validation set was not used for training; only the training split is used, which follows this paper.

For DDAP (DDAIG), we evaluated lambda on the test domain (as shown in Table 5) just to show its real impact on test data. In another paper we used the val set for hyperparameter selection.

We reported performance using only the last-epoch model. It's kinda weird to use val performance as a model-selection metric in DG, since the val data come from the source domains and a higher val result might just mean overfitting to them.

The val performance is only printed by the old-version code. In Dassl, you need to set TEST.SPLIT = val in order to evaluate a model on the validation set.
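As a minimal sketch, assuming the usual Dassl yaml config layout (only the TEST.SPLIT key is confirmed by the discussion above; the file structure is an assumption), that setting would look like:

```yaml
TEST:
  SPLIT: val  # evaluate on the validation split instead of the test split
```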

FrancescoCappio commented on July 20, 2024

I noticed two differences in the parameters between your logs and the standard parameters here in Dassl: DDAIG.LMDA and DDAIG.WARMUP. However, I don't think the performance gap is caused by these small differences since, even using your values, I can't reproduce your results. I also noticed that ddap.lmda_p takes different values in your logs: 0.1 for art_painting and photo, and 0.5 for sketch and cartoon. Should I also use these different values for my experiments?

KaiyangZhou commented on July 20, 2024

Yes, try setting INPUT.PIXEL_MEAN = [0.5, 0.5, 0.5] (and the same for INPUT.PIXEL_STD), and use exactly the same hyperparameters as in the log files.
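Rendered as a config fragment (yaml; the INPUT.PIXEL_MEAN and INPUT.PIXEL_STD keys come from the suggestion above, the file layout is an assumption), that would be:

```yaml
INPUT:
  PIXEL_MEAN: [0.5, 0.5, 0.5]
  PIXEL_STD: [0.5, 0.5, 0.5]
```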

FrancescoCappio commented on July 20, 2024

I tried setting both (PIXEL_MEAN and PIXEL_STD), but I still cannot reproduce your performance.

KaiyangZhou commented on July 20, 2024

So what results did you get, exactly?

Can you show the std as well?

FrancescoCappio commented on July 20, 2024

I am attaching the logs for run 0 here. Results are:

| run | art | cartoon | sketch | photo |
|-------|-------|-------|-------|-------|
| run 0 | 79.39 | 75.85 | 75.64 | 95.27 |
| run 1 | 79.05 | 75.04 | 69.12 | 94.07 |

art_painting_log.txt
cartoon_log.txt
photo_log.txt
sketch_log.txt

KaiyangZhou commented on July 20, 2024

@FrancescoCappio cool, thx, I'll have a look!

KaiyangZhou commented on July 20, 2024

To follow the old-version code, you need to use an input mean of 0 and an input std of 1 so that the pixel values fall in the range [0, 1] (I gave you the wrong information earlier, sorry about that).

The pixel value range is important because the FCN's output is squashed into [-1, 1]. See this.
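To see why the pixel range matters, here is a small numpy sketch of the idea (the FCN output below is a random stand-in, and all variable names are illustrative, not from the actual Dassl code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((3, 8, 8))  # pixel values in [0, 1], as produced by ToTensor()

# mean 0 / std 1 normalization (the old-version setting) keeps values in [0, 1]
x_norm = (x - 0.0) / 1.0

# ImageNet statistics stretch the per-channel range well beyond [0, 1]
imagenet_mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
imagenet_std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)
x_imagenet = (x - imagenet_mean) / imagenet_std

# DDAIG-style perturbation: the FCN output is tanh-squashed into [-1, 1]
# and scaled by lmda before being added to the image
fcn_out = np.tanh(rng.standard_normal((3, 8, 8)))  # stand-in for the real FCN
lmda = 0.3
x_perturbed = x_norm + lmda * fcn_out
```

Since ImageNet normalization makes the input range several units wide while the perturbation stays within [-lmda, lmda], the same lmda has a relatively smaller effect there, which is consistent with needing a larger lmda under ImageNet statistics.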

I ran this Dassl code with a higher lmda = 1.5 (when using ImageNet statistics, lmda needs to be increased accordingly to reproduce the effect).

I experimented only on the art and sketch domains, as they are the most challenging ones. I got:

| run | art | sketch |
|-----|-------|-------|
| 1 | 84.62 | 76.43 |
| 2 | 82.13 | 75.48 |

For the cartoon and photo domains, the lmda values might be different.

Hope this helps!

KaiyangZhou commented on July 20, 2024

Just tried using a mean of 0 and a std of 1 to make the pixel range fall in [0, 1].

On art, with the default lmda = 0.3, I got:

run1: 83.20
run2: 85.11
run3: 82.86

avg: 83.72
std: 0.99
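For reference, the average and std quoted above can be reproduced with a population standard deviation over the three runs (an assumption about which estimator was used; the sample std would give a different value):

```python
import statistics

runs = [83.20, 85.11, 82.86]
avg = sum(runs) / len(runs)
std = statistics.pstdev(runs)  # population std over the three runs
print(f"avg: {avg:.2f}, std: {std:.2f}")  # avg: 83.72, std: 0.99
```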

More runs should be done to reduce the variance and get a fairer number.

Different domains might need a different lmda; that's the tricky part. For sketch, for example, a higher weight is favored, e.g. 0.5 or 0.7.

I've updated the config files with the new pixel mean and std values.

I'm closing this issue for now
