Comments (2)
In the appendix, I found the following grid search over hyperparameters, which I think is also used for the GCGL tasks.

However, the GCGL datasets contain fewer than 1,000 samples per task. This means that when n_memory is set to 1000 during the hyperparameter search (which I assume gives the best performance), GEM is effectively storing the entire history of data for regularization purposes. That doesn't seem fair, since it defeats the purpose of continual learning; but maybe you used different parameters for GCGL? If possible, I would like to know the exact hyperparameters used for the evaluation of GEM on GCGL.
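To make the concern concrete, here is a toy sketch; the per-task sample count is a number I made up, chosen only to be under 1,000:

```python
# Toy illustration (hypothetical numbers): if n_memory >= samples per task,
# GEM's episodic buffer can retain every sample ever seen, so the "memory"
# is no longer a subset of past data.
samples_per_task = 900   # assumed: GCGL tasks have < 1000 samples each
n_memory = 1000          # value from the grid search in the appendix

stored_per_task = min(samples_per_task, n_memory)
assert stored_per_task == samples_per_task  # the whole task fits in memory
```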
Thank you.

Wei
Hi Wei,
Sorry for the misunderstanding. In our experiments, the memory size is set to 100.

One thing to mention is that a larger memory is not always better for methods like GEM. Since GEM constrains the gradient of the new task using the gradients calculated on the data stored in memory, the larger the memory, the less flexibility the model has to adapt to the new task, which may hurt performance on the new tasks and hence the overall performance. When the buffered data contain noisy examples or outliers, this adverse effect can be even more significant. Therefore, there is a trade-off between the buffer size and flexibility (the capability to adapt to new tasks).
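For readers unfamiliar with the mechanism, here is a minimal single-constraint sketch of GEM's gradient projection. This is an illustrative reduction to one memory constraint, not the CGLB implementation; the full method (Lopez-Paz & Ranzato, 2017) solves a quadratic program over the gradients of all previous tasks:

```python
import torch

def gem_project(grad_new: torch.Tensor, grad_mem: torch.Tensor) -> torch.Tensor:
    """Single-constraint GEM projection (illustrative sketch).

    grad_new: flattened gradient of the loss on the current task.
    grad_mem: flattened gradient of the loss on the episodic memory.

    If the two gradients conflict (negative inner product), remove the
    component of grad_new that would increase the loss on the memory.
    """
    dot = torch.dot(grad_new, grad_mem)
    if dot < 0:  # the update would hurt performance on buffered past data
        grad_new = grad_new - (dot / torch.dot(grad_mem, grad_mem)) * grad_mem
    return grad_new
```

With more (or noisier) memory constraints, more of the new-task gradient gets projected away, which is exactly the flexibility trade-off described above.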
Thank you for the clarification about the hyperparameters. Indeed, a larger memory makes the model less flexible, and I think the effect is even more severe when the tasks are small. Thank you for the additional insight.