
License: GNU General Public License v3.0


EARS - Evolutionary Algorithm Rating System

EARS is a free and open-source Java-based framework for ranking, developing, and experimenting with single- and multi-objective evolutionary algorithms. The framework can be used with any optimization algorithm and is not limited to evolutionary algorithms. EARS provides an easy and reliable way to rate and rank optimization algorithms, and a large set of implemented optimization algorithms and test problems is already included.

EARS used in an online competition

Included features

  • Multi-objective evolutionary algorithms (NSGA-II, NSGA-III, GDE3, PAES, PESA2, SPEA2, IBEA, OMOPSO, MOEA/D).
  • Single-objective evolutionary algorithms (ABC, CRO, DE, FWA, GOA, GWO, ICA, JADE, PSO, TLBO).
  • A wide variety of quality indicators (CS, EPS, IGD, IGD+, GD, MS, HV, R1, R2, R3, SPREAD, ER).
  • Benchmarks (CEC2005, CEC2009, CEC2010, CEC2011, CEC2014, CEC2015, ZDT, DTLZ, WFG).
  • Many common test functions (Ackley, Booth, Griewank, Rastrigin, Schaffer, Schwefel, Sphere ...).
  • Experiments, examples and prepared benchmarks.

How to use

The latest release, including the .jar file, can be downloaded from the releases page.

IntelliJ

First, download the EARS project:

  • Go to: File -> New -> Project from Version Control -> Git
  • Set the URL to https://github.com/UM-LPM/EARS.git
  • Select the directory where you want to save the project (example: "path to my projects/EARS")
  • Import the Gradle project

Add the EARS project as a dependency

If you want to add EARS as a dependency to your project, follow these steps:

  • Create a new project (select Gradle) and set the EARS project as a dependency

  • In the file settings.gradle add:

Groovy:

include ':EARS'
project(':EARS').projectDir = new File('path to EARS') // example: new File('../my_projects/EARS')  or project.projectDir.path + "/../EARS"

Kotlin:

include(":EARS")
project(":EARS").projectDir = File("path to EARS")
  • In the file build.gradle add:

Groovy:

dependencies {
    implementation project(':EARS')
}

Kotlin:

dependencies {
    implementation(project(":EARS"))
}

Tips

  • If you have a particular representation for your solution, create your own by extending the Solution class in EARS: class MySolution extends Solution.
  • Code examples can be found in the package org.um.feri.ears.examples.
  • All information about the given problem (dimensions, constraints, bounds, etc.) can be obtained from the Problem object, which is accessible from the Task object (e.g. task.problem.getLowerLimit()).
  • Before every evaluation, check whether the stopping criterion has been reached by calling task.isStopCriterion(). If eval is called after the stopping criterion is reached, a StopCriterionException will be thrown.
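The first tip can be sketched as follows. Note that a minimal stand-in for EARS's Solution base class is stubbed here so the snippet is self-contained (the real class in org.um.feri.ears.problems carries additional fitness bookkeeping), and the permutation representation is purely illustrative:

```java
// Stand-in for org.um.feri.ears.problems.Solution, stubbed so this sketch
// compiles on its own; in a real project, extend the EARS class instead.
class Solution {
}

// Hypothetical custom representation: a permutation-based solution.
class MySolution extends Solution {
    private final int[] permutation;

    MySolution(int[] permutation) {
        this.permutation = permutation.clone(); // defensive copy
    }

    int[] getPermutation() {
        return permutation.clone();
    }
}

public class MySolutionDemo {
    public static void main(String[] args) {
        MySolution s = new MySolution(new int[]{2, 0, 1});
        System.out.println(s.getPermutation().length);
    }
}
```

Keeping the representation immutable (defensive copies in the constructor and getter) avoids accidental sharing of state between solutions during selection and replacement.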

Examples

Implementing a custom algorithm by extending the Algorithm class:

import org.um.feri.ears.algorithms.NumberAlgorithm;
import org.um.feri.ears.algorithms.AlgorithmInfo;
import org.um.feri.ears.algorithms.Author;
import org.um.feri.ears.problems.DoubleProblem;
import org.um.feri.ears.problems.NumberSolution;
import org.um.feri.ears.problems.StopCriterionException;
import org.um.feri.ears.problems.Task;

public class RandomWalkAlgorithm extends NumberAlgorithm { // needs to extend NumberAlgorithm
    private NumberSolution<Double> best; // used to save global best solution

    public RandomWalkAlgorithm() {
        ai = new AlgorithmInfo("RW", "Random Walk", "");
        au = new Author("Matej", "[email protected]");
    }

    @Override
    public NumberSolution<Double> execute(Task<NumberSolution<Double>, DoubleProblem> task) throws StopCriterionException { // method which executes the algorithm
        // the task object holds information about the stopping criterion
        // and information about the problem (number of dimensions, number of constraints, upper and lower bounds...)
        NumberSolution<Double> newSolution;
        best = task.getRandomEvaluatedSolution();  // generate new random solution (number of evaluations is incremented automatically)
        // to evaluate a custom solution or an array use task.eval(mySolution)
        while (!task.isStopCriterion()) { // loop until the stopping criterion is reached

            newSolution = task.getRandomEvaluatedSolution();
            if (task.problem.isFirstBetter(newSolution, best)) { // use method isFirstBetter to check which solution is better (it checks constraints and considers the type of the problem (minimization or maximization))
                best = newSolution;
            }
            task.incrementNumberOfIterations(); // increase the number of generations for each iteration of the main loop
        }
        return best; // return the best solution found
    }

    @Override
    public void resetToDefaultsBeforeNewRun() {
    }
}

Executing a single Task:

import org.um.feri.ears.algorithms.NumberAlgorithm;
import org.um.feri.ears.algorithms.so.de.DEAlgorithm;
import org.um.feri.ears.problems.*;
import org.um.feri.ears.problems.unconstrained.Sphere;
import org.um.feri.ears.util.random.RNG;

public class SOSingleRun {

    public static void main(String[] args) {

        //RNG.setSeed(100); // set a specific seed for the random generator

        DoubleProblem problem = new Sphere(5); // problem Sphere with five dimensions

        Task sphere = new Task(problem, StopCriterion.EVALUATIONS, 10000, 0, 0); // set the stopping criterion to max 10000 evaluations

        NumberAlgorithm alg = new DEAlgorithm(DEAlgorithm.Strategy.JDE_RAND_1_BIN);
        NumberSolution<Double> best;
        try {
            best = alg.execute(sphere);
            System.out.println("Best solution found = " + best); // print the best solution found after 10000 evaluations
        } catch (StopCriterionException e) {
            e.printStackTrace();
        }
    }
}

To perform a tournament you need more than one algorithm (player) and more than one task (problem) in the benchmark:

import org.um.feri.ears.algorithms.NumberAlgorithm;
import org.um.feri.ears.algorithms.so.abc.ABC;
import org.um.feri.ears.algorithms.so.gwo.GWO;
import org.um.feri.ears.algorithms.so.jade.JADE;
import org.um.feri.ears.algorithms.so.random.RandomWalkAlgorithm;
import org.um.feri.ears.algorithms.so.tlbo.TLBOAlgorithm;
import org.um.feri.ears.benchmark.Benchmark;
import org.um.feri.ears.benchmark.RPUOed30Benchmark;
import org.um.feri.ears.util.random.RNG;

import java.util.ArrayList;

public class SOBenchmarkExample {

    public static void main(String[] args) {
        RNG.setSeed(System.currentTimeMillis()); //set the seed of the random generator
        Benchmark.printInfo = false; // set to true to print one-on-one results
        //add algorithms to a list

        ArrayList<NumberAlgorithm> algorithms = new ArrayList<NumberAlgorithm>();
        algorithms.add(new ABC());
        algorithms.add(new GWO());
        algorithms.add(new TLBOAlgorithm());
        algorithms.add(new RandomWalkAlgorithm());
        algorithms.add(new JADE());

        RPUOed30Benchmark rpuoed30 = new RPUOed30Benchmark(); // benchmark with prepared tasks and settings

        rpuoed30.addAlgorithms(algorithms);  // register the algorithms in the benchmark

        rpuoed30.run(5); // start the tournament with 5 runs/repetitions
    }
}

To use ExpBas, the user has to define the ExpBas settings (see the main method) and modify the implementation of their optimization algorithm (see the runDE method) accordingly:

  • To check whether the algorithm was in the exploration phase (otherwise it is in the exploitation phase), pass the reference solutions, the new solution, and the settings to the method ExpBas.isExploration(...).
  • The user has to count exploration phases manually for each newly created solution (exploration is assumed for random solutions).
  • At the end, the user has to divide the number of exploration phases by the number of evaluations to obtain the ExpBas measure.

import org.apache.commons.lang3.ArrayUtils;
import org.um.feri.ears.problems.Problem;
import org.um.feri.ears.problems.unconstrained.*;
import org.um.feri.ears.util.Util;
import java.util.*;

public class ExpBasExample {
    
    public static void main(String[] args) {
        Problem problem = new Rastrigin(2); // Get some problem
        
        // First initialize settings!
        ExpBas.ExpBasSettings setting = new ExpBas.ExpBasSettings(problem, 0.1); // Second parameter is in percentages and decides the step per each dimension
        setting.lsVariant = ExpBas.LocalSearchVariant.DeterministicBestImprovement;
        setting.nType = ExpBas.NeighbourhoodType.Sparse;
        setting.nOrder = ExpBas.NeighbourhoodOrder.NonReversed;
        
        Object[] result = runDE(problem, setting); // Pass this setting to some metaheuristic
        // Print your results
        System.out.println("Best fitness: " + result[1] + " -> ExpBas: " + result[2]);
        System.out.print("Solution: [ ");
        for(int i = 0; i < problem.getNumberOfDimensions(); i++) {
            ArrayList<Double> solution = (ArrayList<Double>) (result[0]);
            System.out.print(solution.get(i).toString() + " ");
        }
        System.out.println("]");
    }

    // Simple implementation of DE - EXAMPLE
    public static Object[] runDE(Problem problem, final ExpBas.ExpBasSettings setting){
        double F = 0.5;
        double CR = 0.9;
        int NP = 25;
        int maxFES = problem.getNumberOfDimensions() * 1000;

        ArrayList<ArrayList<Double>> population = new ArrayList<>();
        ArrayList<Double> populationFitnesses = new ArrayList<>();
        population.add(new ArrayList<>(Arrays.asList(ArrayUtils.toObject(problem.getRandomVariables()))));
        populationFitnesses.add(problem.eval(population.get(0)));
        ArrayList<Double> bestSolution = new ArrayList<>(population.get(0));
        double bestFitness = populationFitnesses.get(0);

        ExpBas metric = new ExpBas(); // Initialize EXPBAS
        int explorationPhases = 1; // Exploration is assumed for random solutions
        int cntFES = 1;

        // Initialize population
        for(int i = 1; i < NP; i++){
            population.add(new ArrayList<>(Arrays.asList(ArrayUtils.toObject(problem.getRandomVariables()))));
            explorationPhases++; // Exploration is assumed for random solutions
            populationFitnesses.add(problem.eval(population.get(i)));
            cntFES++;
            if(populationFitnesses.get(i) < bestFitness){
                bestFitness = populationFitnesses.get(i);
                bestSolution = new ArrayList<>(population.get(i));
            }
        }

        // Main loop
        while(cntFES < maxFES){
            for(int i = 0; i < NP; i++){
                // Mutation
                Set<Integer> mySet = new LinkedHashSet<>();
                while (mySet.size() < 3) {
                    mySet.add(org.um.feri.ears.util.RNG.nextInt(NP));
                }
                List<Integer> uniqInds = new ArrayList<>(mySet);

                ArrayList<Double> v = new ArrayList<>();
                for(int j = 0; j < problem.getNumberOfDimensions(); j++){
                    v.add(population.get(uniqInds.get(0)).get(j) + F * (population.get(uniqInds.get(1)).get(j) - population.get(uniqInds.get(2)).get(j)));
                }

                // Crossover (binomial); jRand ensures at least one component is taken from the mutant vector
                int jRand = org.um.feri.ears.util.RNG.nextInt(problem.getNumberOfDimensions());
                ArrayList<Double> u = new ArrayList<>();
                for(int j = 0; j < problem.getNumberOfDimensions(); j++){
                    if(org.um.feri.ears.util.RNG.nextDouble() <= CR || j == jRand)
                        u.add(v.get(j));
                    else
                        u.add(population.get(i).get(j));
                }

                // To determine whether this is exploration or exploitation, we pass the reference solutions and the new solution
                // In DE, four solutions are used to create a new solution, so these four are the reference solutions
                ArrayList<ArrayList<Double>> referenceSols = new ArrayList<>();
                referenceSols.add(new ArrayList<>(population.get(uniqInds.get(0))));
                referenceSols.add(new ArrayList<>(population.get(uniqInds.get(1))));
                referenceSols.add(new ArrayList<>(population.get(uniqInds.get(2))));
                referenceSols.add(new ArrayList<>(population.get(i)));

                boolean isRandom = false;  // Flag remembers if new solution is created randomly
                if(!problem.isFeasible(u)){
                    u = new ArrayList<>(Arrays.asList(ArrayUtils.toObject(problem.getRandomVariables())));
                    explorationPhases++;  // Exploration is assumed for random solutions
                    isRandom = true;
                }
                double uFitness = problem.eval(u);
                cntFES++;

                // Selection
                if(uFitness < populationFitnesses.get(i)){
                    population.set(i, u);
                    populationFitnesses.set(i, uFitness);
                }

                if(uFitness < bestFitness){
                    bestSolution = new ArrayList<>(u);
                    bestFitness = uFitness;
                }
                if(!isRandom){
                    // In case new solution was not generated randomly, we call our metric and check if exploration
                    // We pass reference solutions, new solution and concrete ExpBas setting
                    if(metric.isExploration(referenceSols, u, setting)){
                        explorationPhases++;
                    }
                }
            }
        }
        // The total ExpBas measure is the number of exploration phases divided by the number of evaluations.
        double expbas = (double) explorationPhases / (double) cntFES;
        return new Object[] { bestSolution, bestFitness, expbas };
    }
}

Publications

N. Veček, M. Mernik, and M. Črepinšek. "A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms." Information Sciences 277 (2014): 656-679. http://www.sciencedirect.com/science/article/pii/S002002551400276X

N. Veček, M. Črepinšek, M. Mernik, D. Hrnčič. "A comparison between different chess rating systems for ranking evolutionary algorithms." 2014 Federated Conference on Computer Science and Information Systems. IEEE, 2014. https://ieeexplore.ieee.org/abstract/document/6933058

N. Veček, M. Mernik, B. Filipič, and M. Črepinšek. "Parameter tuning with Chess Rating System (CRS-Tuning) for meta-heuristic algorithms." Information Sciences 372 (2016): 446-469. https://www.sciencedirect.com/science/article/pii/S0020025516306430

N. Veček, M. Črepinšek, and M. Mernik. "On the influence of the number of algorithms, problems, and independent runs in the comparison of evolutionary algorithms." Applied Soft Computing 54 (2017): 23-45. https://doi.org/10.1016/j.asoc.2017.01.011

N. Veček, S. H. Liu, M. Črepinšek, and M. Mernik. "On the importance of the artificial bee colony control parameter ‘limit’." Information Technology And Control 46.4 (2017): 566-604. http://www.itc.ktu.lt/index.php/ITC/article/view/18215

M. Ravber, M. Mernik, and M. Črepinšek. "The impact of quality indicators on the rating of multi-objective evolutionary algorithms." Applied Soft Computing 55 (2017): 265-275. https://www.sciencedirect.com/science/article/pii/S1568494617300534

M. Ravber, M. Mernik, and M. Črepinšek. "Ranking multi-objective evolutionary algorithms using a chess rating system with quality indicator ensemble." 2017 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2017. https://ieeexplore.ieee.org/abstract/document/7969481

M. Ravber, M. Črepinšek, M. Mernik, and T. Kosar. "Tuning Multi-Objective Optimization Algorithms for the Integration and Testing Order Problem." International Conference on Bioinspired Methods and Their Applications. Springer, Cham, 2018. https://link.springer.com/chapter/10.1007/978-3-319-91641-5_20

M. Jesenik, M. Mernik, M. Črepinšek, M. Ravber, and M. Trlep. "Searching for soil models’ parameters using metaheuristics." Applied Soft Computing 69 (2018): 131-148. https://doi.org/10.1016/j.asoc.2018.04.045

M. Črepinšek, M. Ravber, M. Mernik, and T. Kosar. "Tuning Multi-Objective Evolutionary Algorithms on Different Sized Problem Sets." Mathematics, 7.9 (2019): 824. https://www.mdpi.com/2227-7390/7/9/824

M. Črepinšek, S. H. Liu, M. Mernik, and M. Ravber. "Long term memory assistance for evolutionary algorithms." Mathematics, 7(11) (2019): 1129. https://www.mdpi.com/2227-7390/7/11/1129

M. Ravber, Ž. Kovačević, M. Črepinšek, and M. Mernik. "Inferring Absolutely Non-Circular Attribute Grammars with a Memetic Algorithm." Applied Soft Computing 100 (2021), 106956. https://www.sciencedirect.com/science/article/abs/pii/S1568494620308942

J. Jerebic, M. Mernik, S. Liu, M. Ravber, M. Baketarić, L. Mernik, and M. Črepinšek. "A novel direct measure of exploration and exploitation based on attraction basins." Expert Systems with Applications 167 (2021), 114353. https://doi.org/10.1016/j.eswa.2020.114353

The authors acknowledge the financial support from the Slovenian Research Agency (research core funding No. P2-0041 COMPUTER SYSTEMS, METHODOLOGIES, AND INTELLIGENT SERVICES) http://p2-0041.feri.um.si/en/

Contributors

galme, lpm-user, mihael-mika, mir1001, ravby, smidmarko, zskamljic

ears's Issues

Cannot find symbol

Cannot find symbol:

  • Symbol: method addDummyTask(java.lang.String)
  • Location: (318, 27), variable dr of type org.um.feri.ears.benchmark.DummyRating

Build system

I have problems building this project because some tools do not recognize it correctly. Please use a build tool such as Gradle to ease project initialization and building.

RatingBenchmark comparator can crash when there are many players

Under certain circumstances, RatingBenchmark can crash when there are many competing players with this exception:

Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract!
	at java.util.TimSort.mergeLo(TimSort.java:777)
	at java.util.TimSort.mergeAt(TimSort.java:514)
	at java.util.TimSort.mergeCollapse(TimSort.java:441)
	at java.util.TimSort.sort(TimSort.java:245)
	at java.util.Arrays.sort(Arrays.java:1512)
	at java.util.ArrayList.sort(ArrayList.java:1454)
	at java.util.Collections.sort(Collections.java:175)
	at org.um.feri.ears.benchmark.RatingBenchmark.setWinLoseFromResultList(RatingBenchmark.java:164)
	at org.um.feri.ears.benchmark.RatingBenchmark.run(RatingBenchmark.java:222)
	at CrashTest.main(CrashTest.java:38)

Code to reproduce:
(Slightly modified code of the MainBenchmarkTest example that adds a single algorithm 100 times to simulate many players)

Util.rnd.setSeed(20);

ArrayList<Algorithm> players = new ArrayList<Algorithm>();
for (int i = 0; i < 100; i++) {
    Algorithm algorithm = new ES1p1sAlgorithm();
    // Give each instance different name, so they don't clash.
    algorithm.setAlgorithmInfo(new AlgorithmInfo("", "", Integer.toString(i), ""));
    players.add(algorithm);
}

ResultArena ra = new ResultArena(100);
RatingRPUOed2 suopm = new RatingRPUOed2(); //Create benchmark
suopm.setDisplayRatingIntervalChart(false);
for (Algorithm al:players) {
    ra.addPlayer(al.getID(), 1500, 350, 0.06,0,0,0); //init rating 1500
    suopm.registerAlgorithm(al);
}
BankOfResults ba = new BankOfResults();
suopm.run(ra, ba, 20); //repeat competition 20X

Constraints definition

I would like to have an option to set a constraint as a function. The constraint function should be defined as c(A: Array[Type]) -> Type.

TaskComparator.compare throws NullPointerException

When testing the example from the project readme (MainBenchMarkTest), the program sometimes crashed at random. This issue is 100% reproducible if the random seed at the beginning of the main body is set to 3 (Util.rnd.setSeed(3)).

Update single objective intervals for example in ProblemSphere

upperLimit = new ArrayList(Collections.nCopies(numberOfDimensions, 200.0));
->upperLimit = new ArrayList(Collections.nCopies(numberOfDimensions, 100.0));

public ProblemSphere(int d) {
super(d,0);

    lowerLimit = new ArrayList<Double>(Collections.nCopies(numberOfDimensions, -100.0));
    upperLimit = new ArrayList<Double>(Collections.nCopies(numberOfDimensions, 200.0));

    name = "Sphere";
}

Missing methods and different return types

The error is reported in org.um.feri.ears.engine.GenerateBenchmarksFile. As far as I can see, the following methods are missing:

  • getNumberOfRuns()
  • getStopCriteria()
  • getStopCondition()
  • getProblems()

RatingRPUOed2 returns a different value than expected in org.um.feri.ears.engine.BenchmarkRunner: it returns TaskBase instead of Task.

DoubleMOTask task problem

  • Error example 1
    import org.um.feri.ears.algorithms.moo.demo.D_DEMO
    ...
    val moead = new D_MOEAD(1000)
    val demo = new D_DEMO(1000, 1)
    ...
    val task = new DoubleMOTask(EnumStopCriteria.ITERATIONS, 10000, 10000, 10000, 0.000001, myProblem);
    ...
    val r_moead = moead.execute(task)
    val r_demo = demo.execute(task)
    
    Returns:
    Exception in thread "main" java.lang.NullPointerException
      at org.um.feri.ears.util.Ranking.<init>(Ranking.java:71)
      at org.um.feri.ears.algorithms.MOAlgorithm.execute(MOAlgorithm.java:188)
    
  • Error example 2
    import org.um.feri.ears.algorithms.moo.demo.D_DEMO
    ...
    val moead = new D_MOEAD(1000)
    val demo = new D_DEMO(1000, 1)
    ...
    val task = new DoubleMOTask(EnumStopCriteria.ITERATIONS, 10000, 10000, 10000, 0.000001, myProblem);
    ...
    val r_demo = demo.execute(task)
    val r_moead = moead.execute(task)
    
    Returns:
    Exception in thread "main" java.lang.IndexOutOfBoundsException: Index out of Bound 928
      at org.um.feri.ears.problems.moo.ParetoSolution.get(ParetoSolution.java:155)
      at org.um.feri.ears.algorithms.moo.moead.MOEAD.start(MOEAD.java:208)
      at org.um.feri.ears.algorithms.moo.moead.D_MOEAD.start(D_MOEAD.java:28)
      at org.um.feri.ears.algorithms.MOAlgorithm.execute(MOAlgorithm.java:184)
    

CPU Stop criteria for single run is not working!

In case of:

public class Main4Run {

public static void main(String[] args) {
	Task t = new Task(EnumStopCriteria.CPU_TIME, 5000, 1000, 0, 0.0001, new Sphere(5)); //run problem Sphere Dimension 5, 3000 evaluations
	RandomWalkAlgorithm test = new RandomWalkAlgorithm();
	try {
		System.out.println(test.execute(t)); //prints best result after 3000 runs
	} catch (StopCriteriaException e) {
		e.printStackTrace();
	}
}

}
CPU Time is not working for single run!

java.lang.IllegalArgumentException: Comparison method violates its general contract

Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract!
	at java.util.TimSort.mergeHi(Unknown Source)
	at java.util.TimSort.mergeAt(Unknown Source)
	at java.util.TimSort.mergeForceCollapse(Unknown Source)
	at java.util.TimSort.sort(Unknown Source)
	at java.util.Arrays.sort(Unknown Source)
	at java.util.ArrayList.sort(Unknown Source)
	at java.util.Collections.sort(Unknown Source)
	at org.um.feri.ears.benchmark.RatingBenchmark.setWinLoseFromResultList(RatingBenchmark.java:161)
	at org.um.feri.ears.benchmark.RatingBenchmark.run(RatingBenchmark.java:219)
	at MainBenchMarkTest.TEST_ALGO(MainBenchMarkTest.java:53)
	at MainBenchMarkTest.main(MainBenchMarkTest.java:88)

When running the following configuration I get this exception printed above. (It takes some time, but it will be 100% reproducible)

TEST_ALGO(5, 10, 70, 75, 1508185232848L, 2);
public static void TEST_ALGO(int from_f, int to_f, int from_c, int to_c, long seed, int repeat)
	{
		System.out.println("seed: "+seed);
		Util.rnd.setSeed(seed);

		ArrayList<Algorithm> players = new ArrayList<Algorithm>();
		players.add(new DEE007(10,0.7,0.89,"CrepA"));
		players.add(new DEE007(10,0.71,0.79,"CrepB"));
		if(true)
		{
			for(int pop = 10; pop <= 100; pop += 10)
			{
				for(int c = from_c; c <= to_c; c+=1)
				{
					for(int f = from_f; f <= to_f; f+=1)
					{
						double cParam = c/100.0d;
						double fParam = f/100.0d;
						if(cParam > 1)
							cParam = 1;
						if(fParam > 1)
							fParam = 1;
						String aname = "{{"+Integer.toString(pop)+","+Double.toString(cParam)+","+Double.toString(fParam)+"}}";
					    Algorithm algorithm = new DEE007(pop, cParam, fParam, "new_Dominik"+aname);
					    players.add(algorithm);
					}
				}
			}
		}
		ResultArena ra = new ResultArena(100);
		RatingRPUOed2 suopm = new RatingRPUOed2(); //Create benchmark
		suopm.setDisplayRatingIntervalChart(false);
		for (Algorithm al:players) {
		    ra.addPlayer(al.getID(), 1500, 350, 0.06,0,0,0); //init rating 1500
		    suopm.registerAlgorithm(al);
		}
		BankOfResults ba = new BankOfResults();
		suopm.run(ra, ba, repeat); 
		ArrayList<Player> list = ra.recalcRatings();
		//for(int i=0;i < 10 && i < list.size();i++) {
			System.out.println(list.get(0)); //print ranks
		//}
	}

import java.util.ArrayList;

import org.um.feri.ears.algorithms.Algorithm;
import org.um.feri.ears.algorithms.AlgorithmInfo;
import org.um.feri.ears.algorithms.Author;
import org.um.feri.ears.problems.DoubleSolution;
import org.um.feri.ears.problems.StopCriteriaException;
import org.um.feri.ears.problems.Task;
import org.um.feri.ears.util.Util;

public class DEE007 extends Algorithm {
    int pop_size;
    double CR, F;
	ArrayList<DoubleSolution> pop;
	DoubleSolution best;
	//Initialize all agents x with random positions in the search-space.
	public DEE007(int ps, double CR, double F, String s) {
		pop_size = ps;
		this.CR = CR;
		this.F = F;
		ai = new AlgorithmInfo("","",s,s);  //EARS add algorithm name
		au =  new Author("E007", "N/A"); //EARS author info

	}
	public void init(Task taskProblem) throws StopCriteriaException {
		pop = new ArrayList<>();
		DoubleSolution tmp;
		for (int i=0; i<pop_size;i++) {
			if (taskProblem.isStopCriteria()) break;
			tmp = taskProblem.getRandomSolution();
			if (i==0) best = tmp;
			else if (taskProblem.isFirstBetter(tmp, best)) best = tmp;
			pop.add(tmp);
		}
	   	
	}
	
	@Override
	public DoubleSolution execute(Task taskProblem) throws StopCriteriaException {
		init(taskProblem);
		int a, b, c, R;
		DoubleSolution yEval;
		while (!taskProblem.isStopCriteria()) {
			for (int i=0; i<pop_size;i++) {
				if (taskProblem.isStopCriteria()) break;
				do 
				  a = Util.rnd.nextInt(pop_size);
				while (a==i);
				do 
					  b = Util.rnd.nextInt(pop_size);
				while ((b==i)||(b==a));
				do 
					  c = Util.rnd.nextInt(pop_size);
				while ((c==i)||(c==a)||(c==b));
				R = Util.rnd.nextInt(taskProblem.getNumberOfDimensions());
				double y[] = new double[taskProblem.getNumberOfDimensions()];
				for (int j=0; j<taskProblem.getNumberOfDimensions(); j++) {
					if ((Util.nextDouble()<CR) || (j==R)) {
						y[j] = taskProblem.setFeasible(pop.get(a).getDoubleVariables()[j]+F*(pop.get(b).getDoubleVariables()[j]-pop.get(c).getDoubleVariables()[j]),j);
					} else y[j] = pop.get(i).getDoubleVariables()[j];
				}
				yEval = taskProblem.eval(y);
				if (taskProblem.isFirstBetter(yEval, pop.get(i))){
					pop.set(i, yEval);
					if (taskProblem.isFirstBetter(yEval, best)) best = yEval;
				}
			}
		}
		return best;
	}

	@Override
	public void resetDefaultsBeforNewRun() {
		// TODO Auto-generated method stub
		
	}

}

Quality indicators reference points

  • Resources
    Class QualityIndicator in package org.um.feri.ears.qualityIndicator should use ClassLoader.getResource(path_to_file_in_package) in lines 148 and 162.
  • Reference points for other problems/tasks
    I would like to see the quality of two Pareto fronts from two different MOO algorithms that were run on my problem/task. Some indicators have this option because they perform a binary comparison. Indicators that perform a unary comparison do not have this option.
