fgerzer / apsis
License: Other
After the initial random points, the optimizer only ever proposes 0.5 as the next point.
The corresponding warnings seem to be:
/usr/local/lib/python2.7/dist-packages/GPy/core/transformations.py:32: RuntimeWarning: overflow encountered in exp
return np.where(x>lim_val, x, np.log(1. + np.exp(x)))
/usr/local/lib/python2.7/dist-packages/GPy/kern/parts/rbf.py:256: RuntimeWarning: divide by zero encountered in divide
X = X / self.lengthscale
/usr/local/lib/python2.7/dist-packages/GPy/kern/parts/rbf.py:258: RuntimeWarning: invalid value encountered in add
self._K_dist2 = -2.*tdot(X) + (Xsquare[:, None] + Xsquare[None, :])
/usr/local/lib/python2.7/dist-packages/GPy/util/linalg.py:121: RuntimeWarning: invalid value encountered in less_equal
if np.any(diagA <= 0.):
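The first warning comes from evaluating np.exp(x) for large x inside a softplus-style transform. A minimal sketch of a numerically stable variant (the threshold lim_val is an assumption, mirroring the variable name in the warning, not the actual GPy fix):

```python
import numpy as np

def stable_softplus(x, lim_val=36.0):
    # For large x, log(1 + exp(x)) ~= x, so return x directly and
    # clip the argument of exp so the unused branch of np.where
    # never triggers "RuntimeWarning: overflow encountered in exp".
    x = np.asarray(x, dtype=float)
    return np.where(x > lim_val, x, np.log1p(np.exp(np.minimum(x, lim_val))))
```

np.log1p(np.exp(x)) is also more accurate than np.log(1. + np.exp(x)) for small exp(x).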
Same as with f259835.
Candidate.__init__ currently does not check parameter validity.
Currently, SimpleBayesianOptimizationCore returns the same point to every worker that asks for one. This will have to be changed.
This also includes implementing working timeouts.
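One possible way to avoid handing the same point to every worker is to track points that are already out for evaluation and fall back to a different point when a duplicate would be returned. A minimal sketch (class and method names are illustrative, not the apsis API):

```python
import random

class PendingAwareCore(object):
    """Sketch: track points handed out but not yet finished, so two
    workers asking at the same time do not get the identical point."""

    def __init__(self, lower, upper, seed=None):
        self.pending = []                 # points out for evaluation
        self.rng = random.Random(seed)
        self.lower, self.upper = lower, upper

    def next_candidate(self, proposed):
        # If the optimizer's proposal is already being worked on,
        # return a random point instead of duplicating work.
        if proposed in self.pending:
            proposed = self.rng.uniform(self.lower, self.upper)
        self.pending.append(proposed)
        return proposed
```

Timeouts could then be implemented by moving stale entries out of `pending` after a deadline.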
Make them more concise: inline comments, more tests, and a cleaner structure.
Check the implementation of EI.
- There might be a mistake where variance and standard deviation are interchanged.
- Make it multi-dimensional.
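For reference when checking, a minimal sketch of one-dimensional analytic EI for a minimization problem; the function name and signature are illustrative, not the apsis API. The variance/std-dev confusion mentioned above typically happens at the `np.sqrt` step:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, variance, best_f):
    """Analytic Expected Improvement for minimization.

    norm.cdf / norm.pdf work on standardized values, so the
    variance must be converted to a standard deviation exactly once.
    """
    std = np.sqrt(variance)   # the easy place to mix up std and variance
    if std == 0.0:
        return 0.0
    z = (best_f - mean) / std
    return (best_f - mean) * norm.cdf(z) + std * norm.pdf(z)
```

With mean == best_f the result reduces to std * pdf(0), a quick sanity check for the std-vs-variance bug.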
Add missing documentation, e.g. for the init.
See Justin's mail, 2014-11-10:
"Up front: I'd find it cool if we used gitter [1]. That's a persistent chat for GitHub repos... that might be efficient. However, I believe Frederik has to open it."
Is missing documentation except for next_candidate.
In RandomSearchCore.working, we currently only check whether a candidate is in the list, but do not check for equality in points.
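Without a custom __eq__, `candidate in some_list` falls back to object identity, so a second Candidate object with the same point is not found. A sketch of a value-based membership check (the stripped-down Candidate class and helper name are illustrative):

```python
class Candidate(object):
    # Minimal stand-in for illustration; the real class has more fields.
    def __init__(self, params):
        self.params = params

def is_working_on(working_candidates, candidate):
    # Membership by point value rather than object identity, which is
    # what `candidate in working_candidates` tests without __eq__.
    return any(c.params == candidate.params for c in working_candidates)
```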
Document testCandidate.
Candidate requires class documentation.
Write a small tutorial on how to use apsis.
Use lower/upper bounds in the constructor, or use a warping function.
At the moment we optimize EI with scipy.optimize.minimize. We have tried several optimizer methods; all of them get stuck in local extrema.
The problem seems to be well known:
Proposed optimization methods there
Other ideas
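One common remedy for local extrema is to run the local optimizer from several random starting points and keep the best result. A minimal sketch on top of scipy.optimize.minimize (the wrapper name and defaults are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def minimize_with_restarts(f, bounds, n_restarts=20, seed=0):
    """Run L-BFGS-B from several random starts within `bounds`
    (a list of (low, high) pairs) and keep the best result, which
    reduces the chance of ending up in a local extremum."""
    rng = np.random.RandomState(seed)
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lows, highs)
        res = minimize(f, x0, bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best
```

Since each restart is independent, the runs could also be parallelized across workers.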
Install automated documentation system.
Make sure there is consistency between __eq__ and __hash__.
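The contract is that equal objects must have equal hashes, otherwise dicts and sets behave surprisingly. A sketch of a consistent pair on a stripped-down Candidate (the real class has more fields; hash exactly the fields that __eq__ compares):

```python
class Candidate(object):
    """Illustrative sketch of consistent __eq__ and __hash__."""

    def __init__(self, params):
        self.params = tuple(params)   # immutable, therefore hashable

    def __eq__(self, other):
        return isinstance(other, Candidate) and self.params == other.params

    def __ne__(self, other):          # needed on Python 2
        return not self.__eq__(other)

    def __hash__(self):
        # Derived from exactly the fields used in __eq__.
        return hash(self.params)
```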
__init__ currently assumes that all four values (lower_bound, upper_bound, minimization_problem, and random_state) exist in the param dict. Add default values and/or errors.
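One way to combine defaults with errors: default the parameters that have a sensible fallback and raise for the ones that do not. A hypothetical sketch (helper name and default choices are assumptions; only the key names come from the issue text):

```python
def validate_params(param_dict):
    """Fill in defaults and validate the init parameter dict."""
    defaults = {"minimization_problem": True, "random_state": None}
    params = dict(defaults)
    params.update(param_dict)
    # Bounds have no sensible default, so missing bounds are errors.
    for required in ("lower_bound", "upper_bound"):
        if required not in params:
            raise ValueError("Missing required parameter: %s" % required)
    if params["lower_bound"] >= params["upper_bound"]:
        raise ValueError("lower_bound must be less than upper_bound")
    return params
```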
Investigate performance and look into better implementations of Gaussian CDFs. Needed in AcquisitionFunctions, at the moment in Probability of Improvement.
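For scalar evaluations inside a tight loop, one lightweight alternative to scipy.stats.norm.cdf is the standard identity via the error function, which only needs the stdlib (whether this is actually faster here is an assumption to be benchmarked):

```python
import math

def norm_cdf(x):
    # Standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    # Uses math.erf, so no scipy import is needed for a scalar call.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For vectorized batches, scipy.special.ndtr is the usual fast path.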
This means that we can transfer all non-specific optimization code to the general AcqFunction, and just add specific compute_max functions (maybe with an "analytical" keyword) to some AcqFunctions.
Class needs documentation; so does is_better_candidate_as.
From Justin's comment on #6:
I encourage you to stick to 79 characters per line.
There are many reasons for this, e.g.:
- the human eye is much better at reading short lines (that's why newspapers set narrow columns),
- fewer instructions per line, and a line is also the atomic unit in version control systems,
- the possibility to view source code side by side.
Implement a test framework on pre-computed grid points.
It should also include comparisons between cores, plots, and similar things.
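The core idea is that on a pre-computed grid the objective becomes a cheap lookup, so different cores can be compared by the best value reached after a fixed number of suggestions. A minimal sketch (function name and trace format are assumptions):

```python
def evaluate_on_grid(grid_values, suggest_indices):
    """Replay a core's suggestions against pre-computed objective
    values and return the best-so-far trace for plotting/comparison.

    grid_values: dict mapping grid index -> objective value
    suggest_indices: the grid indices a core suggested, in order
    """
    best = None
    trace = []
    for idx in suggest_indices:
        value = grid_values[idx]
        best = value if best is None else min(best, value)
        trace.append(best)
    return trace
```

Comparing cores then amounts to plotting their traces over the same grid.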
When generating five random points, the GP seems to base its model only on the last four.
Candidate lists in RandomSearchCore are shared among all class instances => create instance attributes in the constructor to fix.
=> From Justin's mail:
https://github.com/FrederikDiehl/BayOpt/blob/master/code/apsis/apsis/RandomSearchCore.py#L19
There's a nasty problem hidden in there: finished_candidates becomes a class variable. If you append to it, the change becomes visible in every object of this class, because they all share it. That's not particularly important in production, since usually only one object of this class runs per process. But it can bite you badly when unit testing, and then you spend quite a while searching...
Solution:
class blubb(object):
    candidates = None
    def __init__(self, ...):
        self.candidates = []
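The pitfall and the fix can be demonstrated in a few lines (class names are illustrative):

```python
class SharedBug(object):
    # Bug: a mutable default defined at class level is shared
    # by every instance of the class.
    candidates = []

class Fixed(object):
    candidates = None
    def __init__(self):
        # A fresh list per instance, created in the constructor.
        self.candidates = []

a, b = SharedBug(), SharedBug()
a.candidates.append("x")
shared = b.candidates        # b sees a's append: ["x"]

c, d = Fixed(), Fixed()
c.candidates.append("x")
independent = d.candidates   # d is unaffected: []
```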
Is currently completely undocumented.