Comments (4)
Hi Simone,
Thanks a million for your valuable advice! I'll try that.
Best regards,
Shutong
Hi Shutong,
Thanks for the question!
The volume of the 2D front is the "length" of the curve.
You can check Munkres' book
http://fourier.math.uoc.gr/~papadim/calculus_on_manifolds/Munkres.pdf
Chapter 22
Also, if you check my paper
https://www.ias.informatik.tu-darmstadt.de/uploads/Site/EditPublication/PARISI_JAIR_MORL.pdf
In Eq. 3, the loss is the integral of the indicator over the volume. In the case of a 2D front, think of it as "walking" along the curve and summing the value of the indicator I at each point.
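As a toy illustration (my own sketch, not code from the paper or from mips; the function and argument names are made up), the curve integral can be approximated by sampling ordered points along the front and summing the indicator weighted by the segment lengths:

```python
import numpy as np

def front_loss(front_points, indicator):
    """Approximate the loss as the integral of the indicator I over a 2D front.

    front_points: (N, 2) array of points sampled along the curve, in order.
    indicator:    callable mapping a point on the front to a scalar value of I.
    """
    # Length of each segment between consecutive samples ("walking" the curve).
    seg_len = np.linalg.norm(np.diff(front_points, axis=0), axis=1)
    # Evaluate the indicator at segment midpoints (midpoint rule).
    midpoints = 0.5 * (front_points[:-1] + front_points[1:])
    values = np.array([indicator(p) for p in midpoints])
    return np.sum(values * seg_len)  # sum of I * d(length), i.e. the curve integral
```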
A larger volume does not imply a larger hypervolume, as in the image below, because the volume does not take the reference point into account.
The blue front has a smaller volume, but a larger hypervolume w.r.t. the red reference point.
So it is normal that the volume V oscillates during the learning.
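To see numerically that a shorter front can have a larger hypervolume, here is a small self-contained sketch (my own illustrative numbers and helper names, assuming both objectives are maximized and every point dominates the reference point):

```python
import numpy as np

def curve_length(points):
    """Length ("volume" in 2D) of a front given ordered samples along the curve."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def hypervolume_2d(points, ref):
    """Hypervolume of a non-dominated 2D front w.r.t. a reference point (maximization)."""
    pts = points[np.argsort(-points[:, 0])]  # sort by first objective, descending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                         # y increases as x decreases on the front
        hv += (x - ref[0]) * (y - prev_y)    # area slice contributed by this point
        prev_y = y
    return hv

ref = np.array([0.0, 0.0])
long_front = np.array([[0.1, 1.0], [0.6, 0.6], [1.0, 0.1]])   # long but flat
short_front = np.array([[0.8, 0.9], [0.9, 0.8]])              # short but well placed
print(curve_length(long_front), hypervolume_2d(long_front, ref))    # ~1.28, 0.44
print(curve_length(short_front), hypervolume_2d(short_front, ref))  # ~0.14, 0.80
```

Here the long front has roughly nine times the length but only about half the hypervolume, which is exactly why V can oscillate while the hypervolume still improves.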
Best,
Simone
Hi Simone,
Many thanks for your reply! Yes, it makes sense now: in the case of a 2D front, V is the length of the curve, and I and V together determine the hypervolume.
Now I would like to use the PMGA algorithm on my problem. I find that the loss (and the volume) oscillates and increases at first, but then it drops and never increases again, as shown in the following figure (the x axis is the iteration count and the y axis is the loss). I have no idea why it cannot keep increasing or converge, or which part affects this. Have you ever faced this situation? Should I increase the number of episodes, increase the number of agents per iteration, or decrease the learning rate?
Best regards,
Shutong
Hi Shutong,
I often encounter this problem in RL, but it never happened with PMGA. However, I applied it only to relatively easy problems. My best guess is that the indicator function cannot evaluate solutions accurately once almost all of them are close to the frontier, and PMGA starts behaving erratically. I noticed this with the proposed indicators (the "mixed" ones) on some MOO benchmarks, but I didn't test it extensively. The issue is that these two indicators are sensitive to the hyperparameter lambda.
You can of course try decreasing the learning rate, increasing the number of steps/episodes, and also using different (maybe richer) functions to approximate the manifold, as in the sketch below.
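On that last point, here is a minimal sketch of what a richer manifold approximator could look like (my own illustration with hypothetical names, assuming the manifold is a parametric map from a scalar t in [0, 1] to the policy-parameter space): replacing a linear map with a polynomial one adds capacity at the cost of more parameters to learn.

```python
import numpy as np

def linear_manifold(t, rho):
    """phi(t) = a + t * b: a straight line in policy-parameter space.

    t:   scalar in [0, 1] indexing a point on the manifold.
    rho: (2, d) array of manifold parameters, rows [a, b].
    """
    a, b = rho
    return a + t * b

def poly_manifold(t, rho):
    """phi(t) = sum_k rho[k] * t**k: a richer, degree-(K-1) polynomial map.

    rho: (K, d) array of manifold parameters, one row per power of t.
    """
    powers = t ** np.arange(len(rho))  # [1, t, t^2, ..., t^(K-1)]
    return powers @ rho                # (K,) @ (K, d) -> (d,)
```

The polynomial map can bend to follow a curved front that a straight line cannot represent, but the extra parameters make the optimization harder, which also pairs naturally with a smaller learning rate.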
Best,
Simone