mdchamp / freelunch


Meta-heuristic optimisation suite for python

Home Page: https://pypi.org/project/freelunch/

License: MIT License

python optimisation-algorithms heuristic-search-algorithms meta-heuristic-algorithms

freelunch's Introduction

FreeLunch - Meta-heuristic optimisation suite for python


Please note the minor changes to the optimiser call signature since 0.0.11, details below.


About

FreeLunch is a convenient Python implementation of a number of meta-heuristic optimisation (with an 's') algorithms.


Features

Optimisers

--NEW and EXCITING--

  • Particle attractor optimisation freelunch.PAO (pronounced POW)

Your favourite not in the list? Feel free to add it.

  • Differential evolution freelunch.DE
  • Simulated Annealing freelunch.SA
  • Particle Swarm freelunch.PSO
  • Krill Herd freelunch.KrillHerd
  • Self-adapting Differential Evolution freelunch.SADE
  • Quantum Particle Swarm freelunch.QPSO

--Coming soon to 0.1.0--

  • Quantum Bees
  • Grenade Explosion Method
  • The Penguin one

Benchmarking functions

Tier list: TBA

  • N-dimensional Ackley function
  • N-dimensional Periodic function
  • N-dimensional Happy Cat function
  • N-dimensional Exponential function

Install

Install with pip (requires numpy).

pip install freelunch

Usage

Create instances of your favourite meta-heuristics!

import freelunch
opt = freelunch.DE(my_objective_function, bounds=my_bounds) # Differential evolution

Where,

  • obj: objective function that accepts a single argument, the trial vector x, and returns float or None. i.e. obj(x) -> float or None

  • bounds: iterable of [lower, upper] pairs, one per element of x, i.e. bounds = [[lower, upper]]*len(sol), such that lower <= sol[i] <= upper for each element of a candidate solution sol.
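For concreteness, a minimal objective/bounds pair might look like the following sketch (the 2D sphere function here is a hypothetical stand-in, not part of freelunch):

```python
# Hypothetical objective: the 2D sphere function, minimised at the origin
def obj(x):
    return float(sum(xi ** 2 for xi in x))  # always returns a float

# One [lower, upper] pair per element of x
bounds = [[-5.0, 5.0], [-5.0, 5.0]]

# opt = freelunch.DE(obj, bounds=bounds)  # wire up as above
```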

Running the optimisation

Run by calling the instance. There are several different calling signatures. Use any combination of the arguments below to suit your needs!

To return the best solution only:

quick_result = opt() # (D,)

To return optimum after n_runs:

best_of_nruns = opt(n_runs=n) # (D,)

To return optimum after n_runs in parallel (uses multiprocessing.Pool), see note below:

best_of_nruns = opt(n_runs=n, n_workers=w, pool_args={}, chunks=1) # (D,)

Return best m solutions in np.ndarray:

best_m = opt(n_return=m) # (D, m)

Return a json-friendly dict with fun metadata!

full_output = opt(full_output=True)
    # {
    #     'optimiser':'DE',
    #     'hypers':...,
    #     'bounds':...,
    #     'nruns':nruns,
    #     'nfe':1234,
    #     'solutions':[sol1, sol2, ..., solm*n_runs], # All solutions from all runs sorted by fitness
    #     'scores':[fit1, fit2, ..., fitm*n_runs]
    # }

Customisation

Want to change things around?

  • Change the initialisation strategy

Custom initialisation strategies can be supplied by altering the optimiser.initialiser attribute of any optimiser instance. For example:

opt = freelunch.DE(obj, ...)

def my_initialiser(bounds, N, **hypers):
    '''Custom population initialisation'''
    # Remember to return an iterable of length N
    population = ... # custom logic
    return population

opt.initialiser = my_initialiser # overwrite the initialiser attribute

Additionally, some examples of common initialisation strategies can be found in the freelunch.tech module.
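As a sketch of the idea, a uniform-random initialiser matching this signature could look like the following (freelunch.tech may provide its own equivalents; this stand-alone version uses only the standard library):

```python
import random

def uniform_initialiser(bounds, N, **hypers):
    '''Hypothetical initialiser: N points drawn uniformly within bounds'''
    return [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(N)]
```

Attach it with opt.initialiser = uniform_initialiser.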

  • Change the bounding strategy

The simplest way to do this is to overwrite the optimiser.bounder attribute. There are a number of ready-made strategies in freelunch.tech, or alternatively define a custom method with the following call signature.

opt = freelunch.DE(obj, ...)

def my_bounder(p, bounds, **hypers):
    '''custom bounding method'''
    p.dna = ... # custom bounding logic

opt.bounder = my_bounder # overwrite the bounder attribute

# and then call as before
x_optimised = opt()
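For illustration, a clipping ("sticky") bounder with this signature might be sketched as follows; SimpleNamespace stands in for freelunch's solution objects, which expose a dna attribute as in the template above:

```python
from types import SimpleNamespace

def clip_bounder(p, bounds, **hypers):
    '''Hypothetical bounder: clip each element of p.dna into [lower, upper]'''
    p.dna = [min(max(x, lo), hi) for x, (lo, hi) in zip(p.dna, bounds)]

# Stand-in solution object for a quick check
p = SimpleNamespace(dna=[-3.0, 0.5, 9.0])
clip_bounder(p, [[-1, 1]] * 3)
# p.dna is now [-1, 0.5, 1]
```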
  • Change the hyperparameters

Check out the hyperparameters and set your own (defaults are set automatically):

print(opt.hyper_definitions)
    # {
    #     'N':'Population size (int)',
    #     'G':'Number of generations (int)',
    #     'F':'Mutation parameter (float in [0,1])',
    #     'Cr':'Crossover probability (float in [0,1])'
    # }

print(opt.hyper_defaults)
    # {
    #     'N':100,
    #     'G':100,
    #     'F':0.5,
    #     'Cr':0.2
    # }

opt.hypers.update({'N':300})
print(opt.hypers)
    # {
    #     'N':300,
    #     'G':100,
    #     'F':0.5,
    #     'Cr':0.2
    # }

Benchmarks

Access from freelunch.benchmarks for example:

bench = freelunch.benchmarks.ackley(n=2) # Instantiate a 2D ackley benchmark function

fit = bench(sol) # evaluate by calling
bench.bounds # [[-10, 10],[-10, 10]]
bench.optimum # [0, 0] 
bench.f0 # 0.0
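For reference, the Ackley benchmark above is conventionally defined with parameters a=20, b=0.2, c=2π; this pure-Python sketch is for illustration only and is not the freelunch implementation:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    '''N-dimensional Ackley function; global minimum f(0, ..., 0) = 0'''
    n = len(x)
    rms = math.sqrt(sum(xi ** 2 for xi in x) / n)
    mean_cos = sum(math.cos(c * xi) for xi in x) / n
    return -a * math.exp(-b * rms) - math.exp(mean_cos) + a + math.e
```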

A note on running optimisations in parallel.

Because multiprocessing.Pool relies on pickle to send code to the parallel worker processes, it is imperative that anything passed to the freelunch optimisers can be pickled. For example, the following common Python pattern for producing an objective function with a single argument,

method = ... # some methods / args that are required by the objective function
args = ...

def wrap_my_obj(method, args):
    def _obj(x):
        return method(args, x)
    return _obj

obj = wrap_my_obj(method, args)

cannot be pickled because _obj is not importable from the top-level module scope, and will raise freelunch.util.UnpicklableObjectiveFunction. Instead, consider using functools.partial, i.e.

from functools import partial

method = ... # some methods / args that are required by the objective function
args = ...


def _obj(method, args, x):
    return method(args, x)

obj = partial(_obj, method, args)
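The difference is easy to demonstrate with pickle alone; method and args below are hypothetical stand-ins for the real objective logic:

```python
import pickle
from functools import partial

def method(args, x):
    '''Stand-in for the real objective logic'''
    return args["scale"] * x

args = {"scale": 2.0}

def wrap_my_obj(method, args):
    def _obj(x):
        return method(args, x)
    return _obj

closure_obj = wrap_my_obj(method, args)  # nested function: not picklable
partial_obj = partial(method, args)      # partial of a top-level function: picklable

try:
    pickle.dumps(closure_obj)
    closure_picklable = True
except (pickle.PicklingError, AttributeError):
    closure_picklable = False  # local objects cannot be pickled by reference

roundtripped = pickle.loads(pickle.dumps(partial_obj))  # survives the round trip
```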

freelunch's People

Contributors

mdchamp, timothyrogers


freelunch's Issues

what is a solution anyway?

can probs abstract the solution/particle/bee/ant/penguin/grenade/simp/etc to a separate module?

could call it freelunch.dodgy_analogies??

Fix PSO

Particle swarm is not currently behaving as it should... pesky particles!

Better minimum performance testing

Currently the minimum performance testing is on Exponential and Ackley functions from 1-4 dimensions with (pretty arbitrary) tolerances on the objective functions.

Running all these tests is also slowing down the CI by quite a bit.

While I think that guaranteeing minimum performance for each optimiser (with default hyperparameters) is a good idea, I think a slightly more principled approach is worth investing some time into.

Obviously there is no free lunch, so it's not possible to ensure optimal performance on different optimisation scenarios. Even so, I think that given some fairly reasonable objective functions and tolerances it should be possible to ensure minimum performance.

Optimisation scenarios

  • Convex

Can use the Exponential function in 1, 2, 5 dimensions. Can use a 0.1% tolerance on the minima given default bounds.

  • Multi-modal

Can use the Ackley function in 1, 2, 5 dimensions. Can determine fitness tolerances based on the height of the 2nd deepest minimum (ensure we are within the global minimum rather than hitting a specific fitness target).

  • High-dimensional

Not clear on the best approach for this, exponential is already quite easy. Might leave this out for now.

occasional AttributeError when using SADE

  File "/home/py/.local/lib/python3.10/site-packages/freelunch/base.py", line 58, in __call__
    sols = self.run()
  File "/home/py/.local/lib/python3.10/site-packages/freelunch/optimisers.py", line 121, in run
    F.update()
  File "/home/py/.local/lib/python3.10/site-packages/freelunch/adaptable.py", line 138, in update
    std = self.std if len(self.win_values) == 0 else max(np.std(self.win_values), 10**-3)
AttributeError: 'normally_varying_parameter' object has no attribute 'std'

def op(self):
    self.value = np.random.normal(self.u, self.sig)
    return self.value

def update(self): # fit normal distribution to successful parameters
    u = self.u if len(self.win_values) == 0 else np.mean(self.win_values)
    std = self.std if len(self.win_values) == 0 else max(np.std(self.win_values), 10**-3)

it seems to me that self.sig may have accidentally been called self.std in the update method, but I could be wrong.

Coherent objective function sanitisation

Sometimes objective functions behave badly and do not return friendly types...

My personal thinking is that the default invalid fitness should be None and that all other types np.nan, np.inf, strings etc. should all be cast to None and then have the selection algorithms be aware of this.

A simple way to do this is to wrap the objective function in something that filters the return?
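One possible shape for such a wrapper, as a sketch (the names here are hypothetical, not freelunch API):

```python
import math
from functools import wraps

def sanitised(obj):
    '''Hypothetical decorator: map non-finite or non-numeric returns to None'''
    @wraps(obj)
    def _sane(x):
        fit = obj(x)
        if isinstance(fit, (int, float)) and math.isfinite(fit):
            return float(fit)
        return None  # nan, inf, strings, None, etc. all collapse to None
    return _sane

@sanitised
def bad_obj(x):
    # Misbehaving objective: returns nan for negative inputs
    return float("nan") if x < 0 else x * 2.0
```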

Need to also make sure that the selection algorithms are ok to handle None values for fitness. For example what to do in the None vs. None case?

Increased Flexibility in Bounding

Currently all methods hardcode sticky bounds for the optimisation space. It would be better if this were instead a call to a user definable function (with defaults) which was a bounding method of the base class.

TODO:

  • Replace current sticky bounds calls with abstract call to self.apply_bounds(pop) -> pop
  • Provide default and example bounding strategies, first move sticky bounds to new pattern
  • Provide template class for bounding strategy

Make initialisation more like bounding

I was really pleased with the bounding strategy changes, and I think it would be beneficial to have initialisation work in a similar way.

Should have:

  • default class method of optimiser that calls a static method bound to an attribute
  • overwritable by the end user or by inheriting optimisers
  • init strategies to include as defaults: uniform, normal, fixed
  • Include the new functionality in the readme

Should be an evening's work to write up and test.

How to handle submethods in the optimisers

Some algorithms have functions that are essentially hyperparameters, for example the neighbour function in SA or the mutation operations in DE and SADE.

Need to decide a way to permit users to select these functions both from a list of implemented ones and also provide their own implementations.

I think the ideal solution would be a modification to the base class that requires as little per-optimiser implementation as possible.

Remove codecov.io dependency for coverage tests.

I don't feel that the codecov.io API is adding very much, especially since the coverage reports in my IDE are the ones I actually look at to find missed lines. It would also be good to have more precise control over exactly what causes a failed test.

Thinking:

  • Ensure 90% coverage for every file in src.
  • Ensure 95% coverage across the overall repo
  • Comment coverage table to PR (don't want spam though...)
  • Ignore test directories / files
  • Report state to a badge on README

Travis' retirement party

GH actions handles the testing racket now...

need to figure out how to... unlink... Travis from the project...

Multiproc option for mruns

It would be a really nice QOL improvement to add high-level control for a multiproc wrapper over multiple runs of the optimisers. This would make optimisation on a cluster / HPC a lot more powerful / less janky than it is currently.

The calling syntax would be something like:

best_of_runs = opt(nruns=n_runs, nworkers=n_workers, pool_args={})

and I think using multiprocessing.Pool.imap under the hood would be a good way to go.

This might require a bit of a rework to base.optimiser.call() but shouldn't be too bad to keep the calling syntax the same and still add the functionality / return structure.
