To compute a task matrix of accuracies, you can use the flag --results-dict when running main.py. At the end of each task, the accuracy is then computed for each task so far and stored in 'plotting_dict':
Lines 343 to 345 in 11215d2
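The referenced lines aren't reproduced here, but the idea can be sketched as follows. This is a hedged illustration, not the repo's actual code: the function name `update_plotting_dict`, the key `"acc per context"`, and the `evaluate` callable are all hypothetical stand-ins.

```python
# Hypothetical sketch: after training on each context, evaluate on all
# contexts seen so far and append the accuracies to a plotting dict.
# Names here are illustrative, not the repo's actual API.

def update_plotting_dict(plotting_dict, current_context, evaluate):
    """Store accuracy on every context seen so far (1-indexed contexts)."""
    accs = [evaluate(c) for c in range(1, current_context + 1)]
    plotting_dict.setdefault("acc per context", []).append(accs)
    return plotting_dict

# Example with a dummy evaluation function:
d = {}
for t in range(1, 4):
    update_plotting_dict(d, t, evaluate=lambda c: 1.0 if c == t else 0.5)
# d["acc per context"] is now a lower-triangular list of lists:
# [[1.0], [0.5, 1.0], [0.5, 0.5, 1.0]]
```

Stacking these rows over all tasks gives the lower triangle of a task matrix.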
If you want to do the above while also computing the accuracy for future tasks, you can change this if-statement:
continual-learning/eval/evaluate.py
Lines 91 to 98 in 11215d2
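The actual condition lives in eval/evaluate.py and may differ in its details; conceptually it gates evaluation to contexts already seen, and relaxing it could look like this sketch (function and parameter names are hypothetical):

```python
# Sketch only: the idea is to drop the "only contexts seen so far" guard
# so that accuracy is also computed on contexts not yet trained on.

def contexts_to_evaluate(n_contexts, current_context, include_future=False):
    """Return the (1-indexed) contexts to evaluate after `current_context`."""
    if include_future:
        return list(range(1, n_contexts + 1))   # all contexts, incl. future
    return list(range(1, current_context + 1))  # only contexts seen so far

# After training context 2 of 5:
# default behaviour evaluates [1, 2]; with future tasks, [1, 2, 3, 4, 5]
```

With future tasks included, the stored matrix becomes a full square rather than a lower triangle, which is what FWT-style metrics need.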
Hope this helps!
from continual-learning.
Hi, for the experiment you describe (with one class per task and no storing of data from past tasks), I would indeed expect that for many continual learning methods the result should be a model that only predicts the last class. Regarding the errors, if you give some more details I can see whether I could help.
Thanks for replying @GMvandeVen. I need to recreate the errors and will post them as soon as I find them again. In the meantime, could you also share whether there is a way to obtain a task matrix of accuracies, so as to compute metrics like BWT, FWT, Forgetting Measure and Learning Accuracy?
Here's the error that I got:
(cl-pytorch) [...@GPU12 continual-learning]$ ./compare_task_free.py --experiment=CIFAR10 --scenario=class --iters 1 --budget 1 --contexts 10 --replay none --joint --stream academic-setting
usage: ./compare_task_free.py [-h] [--seed SEED] [--n-seeds N_SEEDS] [--no-gpus] [--no-save] [--full-stag STAG] [--full-ltag LTAG] [--data-dir D_DIR] [--model-dir M_DIR] [--plot-dir P_DIR] [--results-dir R_DIR]
[--time] [--visdom] [--results-dict] [--acc-n ACC_N] [--experiment {splitMNIST,permMNIST,CIFAR10,CIFAR100}] [--stream {fuzzy-boundaries,academic-setting,random}] [--fuzziness ITERS]
[--scenario {task,domain,class}] [--contexts N] [--iters ITERS] [--batch BATCH] [--no-norm] [--conv-type {standard,resNet}] [--n-blocks N_BLOCKS] [--depth DEPTH]
[--reducing-layers RL] [--channels CHANNELS] [--conv-bn CONV_BN] [--conv-nl {relu,leakyrelu}] [--global-pooling] [--fc-layers FC_LAY] [--fc-units N] [--fc-drop FC_DROP]
[--fc-bn FC_BN] [--fc-nl {relu,leakyrelu,none}] [--z-dim Z_DIM] [--singlehead] [--lr LR] [--optimizer {adam,sgd}] [--momentum MOMENTUM] [--pre-convE] [--convE-ltag LTAG]
[--seed-to-ltag] [--freeze-convE] [--recon-loss {MSE,BCE}] [--update-every N] [--replay-update N] [--xdg] [--gating-prop PROP] [--fc-units-sep N] [--epsilon EPSILON] [--c SI_C]
[--temp TEMP] [--budget BUDGET] [--eps-agem EPS_AGEM] [--eval-s EVAL_S] [--fc-units-gc N] [--fc-lay-gc N] [--z-dim-gc N] [--no-context-spec] [--no-si] [--no-agem]
./compare_task_free.py: error: argument --replay-update: invalid int value: 'none'
I am trying to recreate a setting where no task boundaries are provided and no replay is used.
The script ./compare_task_free.py does not have an option --replay. By giving --replay none as input, you set --replay-update to none, which is not a valid value for that option.
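This is standard argparse behaviour: by default (allow_abbrev=True), an undefined flag that is an unambiguous prefix of a defined option is silently expanded to it. A minimal reproduction, unrelated to the repo's actual parser:

```python
# argparse accepts unambiguous abbreviations by default, so an undefined
# "--replay" flag resolves to "--replay-update" when no other option
# shares that prefix. Minimal standalone reproduction:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--replay-update", type=int, default=1)

args = parser.parse_args(["--replay", "5"])  # abbreviation is accepted
print(args.replay_update)                    # -> 5

# Passing a non-integer such as "none" instead triggers:
#   error: argument --replay-update: invalid int value: 'none'
```

Constructing the parser with `argparse.ArgumentParser(allow_abbrev=False)` would make the unknown `--replay` fail loudly instead of being reinterpreted.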
So can I still do no replay and one class per task with no task boundaries with ./compare_task_free.py? Apart from the changes you suggested in main.py, are there any other changes needed so I could obtain a task matrix for all methods with compare_task_free.py?
Thanks for your help!
In principle you can use ./compare_task_free.py with one class per task and no replay, but note that a substantial number of the methods compared in this script expect to store data and/or use replay.
Regarding the task matrices: with the changes I described it should indeed be possible to obtain them, although you will of course have to make a few changes to the code yourself to get them in the format you want.
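Once a full T×T accuracy matrix is available (row i = accuracy on each task after training task i), metrics such as BWT and average forgetting follow directly from the standard definitions. A sketch, assuming a plain nested-list matrix rather than any structure the repo itself produces:

```python
# Sketch, assuming a T x T matrix acc[i][j] = accuracy on task j after
# training on task i (0-indexed). BWT follows Lopez-Paz & Ranzato;
# forgetting follows Chaudhry et al. Adapt indices to your own storage.

def backward_transfer(acc):
    """BWT = mean over tasks j < T-1 of acc[T-1][j] - acc[j][j]."""
    T = len(acc)
    return sum(acc[T - 1][j] - acc[j][j] for j in range(T - 1)) / (T - 1)

def average_forgetting(acc):
    """Mean drop from each task's best past accuracy to its final accuracy."""
    T = len(acc)
    return sum(
        max(acc[i][j] for i in range(j, T - 1)) - acc[T - 1][j]
        for j in range(T - 1)
    ) / (T - 1)

acc = [[0.9, 0.0, 0.0],
       [0.7, 0.8, 0.0],
       [0.6, 0.7, 0.9]]
# BWT        = ((0.6 - 0.9) + (0.7 - 0.8)) / 2 = -0.2
# forgetting = ((0.9 - 0.6) + (0.8 - 0.7)) / 2 =  0.2
```

FWT would additionally need the accuracies on future tasks (the upper triangle), which is why the earlier change to the evaluation condition matters.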
I made the changes described in #28 (comment) and removed the or (i+1 <= current_context), so that a task matrix is stored in the store/results folder. But the results folder still has text files with only a single accuracy value, not a task matrix. I would highly appreciate your help here @GMvandeVen.
The values of the task matrix should then be stored in the dictionary plotting_dict. This dictionary is not written out to a text file by default; you would have to change the code yourself to do that.
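A hedged sketch of writing such a dictionary to disk yourself; the key name and file name are illustrative, not the repo's:

```python
# Sketch: serialise a (hypothetical) plotting dict to JSON so the task
# matrix survives the run. Key and file names are illustrative only.
import json

plotting_dict = {"acc per context": [[0.9], [0.7, 0.8], [0.6, 0.7, 0.9]]}

with open("task_matrix.json", "w") as f:
    json.dump(plotting_dict, f, indent=2)

# Reload to confirm the round-trip:
with open("task_matrix.json") as f:
    assert json.load(f) == plotting_dict
```

JSON keeps the nested-list structure intact, which is easier to post-process into BWT/forgetting numbers than flat text files with single accuracy values.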