
deustotech / dycon-toolbox

The DyCon toolbox is a collection of common tools for the investigation of differential equations, developed in the context of the DyCon project.

Home Page: http://cmc.deusto.eus/dycon/

Languages: MATLAB 88.48%, XSLT 11.52%

dycon-toolbox's People

Contributors

azaharmonge, djoroya, druizb, narmko, spascual232, ubiccari


dycon-toolbox's Issues

Problem in the code, not descending

A problem in the code:

- I ran the code
- It reached the maximum number of iterations

===================OUTPUT_TERMINAL==========================

Warning: Max iteration number reached!!

In ControlProblem/GradientMethod (line 134)
In semilinear (line 116)

=======================================================

- The step in the gradient produced an increment in the functional
- I also attach the workspace here because the simulation can take a while
- The code gave no error and "worked" well

Attached figures: plots2b, plot2b

Code attached as .txt instead of .m because GitHub does not allow uploading other formats:

semilinear.txt

THIS IS THE WORKSPACE; I CHANGED .mat TO .txt IN ORDER TO UPLOAD IT:

2bug.txt

AdaptativeDescent

THE MIDDLE STEP CONTROL OPTION DOES NOT MAKE SENSE, I HAVE NOT WRITTEN IT
WITHOUT MIDDLE STEP CONTROL THERE IS NO ADAPTIVE STEP, RIGHT?
I HAVE WRITTEN THE PART ABOUT THE APPROXIMATION OF THE DERIVATIVE AND OF THE ADJOINT, ALTHOUGH I HAVE NOT FOUND IT IN THE BOOK AND I DO NOT QUITE UNDERSTAND WHY IT IS AN APPROXIMATION (THE P)

This method is used within the GradientMethod method. GradientMethod executes this routine iteratively in order to obtain one update of the control in each iteration. If AdaptativeDescent is chosen, this function updates the control in the following way:

$$u_{new}=u_{old}-\alpha_k\, dJ$$

where dJ is an approximation of the gradient of J obtained via the adjoint state from the optimality conditions of Pontryagin's principle. The optimal control problem is defined by
$$\min J=\min\Psi (t,Y(T))+\int^T_0 L(t,Y,U)dt$$

subject to:

$$\frac{d}{dt}Y=f(t,Y,U).$$

The gradient of $$J$$ is:

$$dJ=\partial_u H=\partial_uL+p\partial_uf$$

An approximation of $$p$$ is computed by solving:

$$-\frac{d}{dt}p = f_Y (t,Y,U)p+L_Y(Y,U)$$

$$ p(T)=\psi_Y(Y(T))$$
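The adjoint-based gradient above can be sketched on a minimal scalar example, which is illustrative and not toolbox code: dynamics $$\dot Y=aY+U$$, running cost $$L=\tfrac12(Y^2+U^2)$$ and $$\Psi=0$$, so the adjoint is $$-\dot p=ap+Y$$ with $$p(T)=0$$ and $$dJ=U+p$$. All names below are assumptions for the sketch.

```python
import numpy as np

# Minimal scalar example of the adjoint-based gradient (illustrative, not
# toolbox code): dynamics y' = a*y + u, running cost L = 0.5*(y^2 + u^2),
# terminal cost Psi = 0. Then the adjoint is -p' = a*p + y with p(T) = 0,
# and the gradient is dJ = L_u + p*f_u = u + p.
a, T, N = -1.0, 1.0, 200
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

def solve_state(u, y0=1.0):
    y = np.empty(N + 1)
    y[0] = y0
    for i in range(N):                       # forward Euler, forward in time
        y[i + 1] = y[i] + dt * (a * y[i] + u[i])
    return y

def solve_adjoint(y):
    p = np.zeros(N + 1)                      # terminal condition p(T) = Psi_Y = 0
    for i in range(N, 0, -1):                # integrate backwards in time
        p[i - 1] = p[i] + dt * (a * p[i] + y[i])
    return p

def cost(u):
    y = solve_state(u)
    return 0.5 * dt * np.sum(y ** 2 + u ** 2)

def gradient(u):
    p = solve_adjoint(solve_state(u))
    return u + p                             # dJ = L_u + p * f_u
```

A small step along -gradient(u) decreases cost(u), which is exactly what the descent update exploits.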

Given the expression of the gradient, we can start with an initial control, solve the adjoint problem and evaluate the gradient. One then updates the initial control in the direction of the approximate gradient with a step size $$\alpha_k$$, where $$\alpha_k$$ is determined by a small variation of the Armijo step-size rule. In each iteration the algorithm doubles the step size, $$\alpha_k=2\alpha_{k-1}$$, and checks whether $$J(y_k,u_k)<J(y_{k-1},u_{k-1})$$; if this holds it continues to the next iteration, and otherwise it halves the step size until the condition is fulfilled or the minimum step size is reached.
In this routine the user has to choose the minimum step size.
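The doubling/halving rule described above can be sketched as follows, on a stand-in quadratic functional rather than the toolbox's control problem; all function and variable names are illustrative.

```python
import numpy as np

# Sketch of the adaptive step rule: double the step each iteration, then
# halve until the functional actually decreases or the minimum step size
# is reached. J and dJ stand in for the control functional and its gradient.
def adaptive_descent(J, dJ, u, alpha=0.1, alpha_min=1e-8, max_iter=100):
    for _ in range(max_iter):
        alpha *= 2.0                        # first try a more ambitious step
        g = dJ(u)
        while J(u - alpha * g) >= J(u):     # shrink until J decreases
            alpha /= 2.0
            if alpha < alpha_min:           # minimum step size reached: stop
                return u, alpha
        u = u - alpha * g
    return u, alpha

# Stand-in quadratic functional J(u) = 0.5*||u - u_star||^2
u_star = np.array([1.0, -2.0, 0.5])
J  = lambda u: 0.5 * np.sum((u - u_star) ** 2)
dJ = lambda u: u - u_star
u_opt, _ = adaptive_descent(J, dJ, np.zeros(3))
```

On this quadratic the rule recovers u_star without any hand-tuned step size, which is the point of the adaptive variant.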

This routine tells GradientMethod to stop when the chosen tolerance on the derivative (or on the relative error, at the user's choice) is reached. Moreover, a maximum number of iterations is enforced.

MANDATORY INPUTS:

NAME: iCP
DESCRIPTION: Control problem object; it carries all the information about the dynamics, the functional to be minimized, and the updates of the current best control found so far.
CLASS: ControlProblem

NAME: tol
DESCRIPTION: the tolerance desired.
CLASS: double

OPTIONAL INPUT PARAMETERS

NAME: InitialLengthStep
DESCRIPTION: This parameter is the step length of the gradient method used at the beginning of the process. By default, this is 0.1.
CLASS: double

NAME: MinLengthStep
DESCRIPTION: This parameter is the lower bound on the step length of the gradient method; if the algorithm needs a step size smaller than this bound, it will make GradientMethod stop.
CLASS: double

OUTPUT PARAMETERS:

All the updates will be carried inside the iCP control problem object.

Name: Unew
Description: Update of the Control
class: a vector valued function in a form of a double matrix

Name: Ynew
Description: Update of State Vector
class: a vector valued function in a form of a double matrix

Name: Jnew
Description: New Value of functional
class: double

Name: dJnew
Description: New Value of gradient
class: a vector valued function in a form of a double matrix

Name: error
Description: The error |dJ|/|U| or |dJ|, depending on the user's choice.
class: double

Name: stop
Description: if this parameter is true, the routine tells GradientMethod to stop
class: logical

Citations:

[1] Cohen, William C., Optimal control theory—an introduction, Donald E. Kirk, Prentice Hall, Inc., New York (1971), 452 pages. https://onlinelibrary.wiley.com/doi/abs/10.1002/aic.690170452

ConjugateGradientDescent

I HAVE LOOKED AT THE CODE AND I DO NOT SEE WHERE THE NONLINEAR CONJUGATE DIRECTION IS TAKEN. THE CODE ONLY OPTIMIZES THE STEP LENGTH BUT DOES NOT TAKE A DIRECTION DIFFERENT FROM THE GRADIENT, RIGHT? FURTHER BELOW, IN CAPITALS, I HAVE WRITTEN THE SAME COMMENT WHERE I HAD TO EXPLAIN THE DESCENT DIRECTION

ConjugateGradientDescent

This method is used within the GradientMethod method. GradientMethod executes this routine iteratively in order to obtain one update of the control in each iteration. If ConjugateGradientDescent is chosen, this function updates the control in the following way:

$$u_{new}=u_{old}-\alpha_k s_k$$

where $$s_k$$ is the descent direction.
The optimal control problem is defined by
$$\min J=\min\Psi (t,Y(T))+\int^T_0 L(t,Y,U)dt$$

subject to:

$$\frac{d}{dt}Y=f(t,Y,U).$$

The gradient of $$J$$ is:

$$dJ=\partial_u H=\partial_uL+p\partial_uf$$

An approximation of $$p$$ is computed by solving:

$$-\frac{d}{dt}p = f_Y (t,Y,U)p+L_Y(Y,U)$$

$$ p(T)=\psi_Y(Y(T))$$

Given the expression of the gradient, we can start with an initial control, solve the adjoint problem and evaluate the gradient. One then updates the initial control in the descent direction with a step size $$\alpha_k$$, where $$\alpha_k$$ is determined by numerically solving:

$$\operatorname{argmin}_{\alpha_k}J(y_k,u_k-\alpha_k s_k)$$

where $$s_k$$ is chosen using the gradient of $$J$$.

WARNING: I HAVE NOT SEEN IN THE CODE HOW THE SEARCH DIRECTION IS CHOSEN. IT SEEMS THAT YOU TAKE THE GRADIENT, RIGHT? USUALLY THERE ARE SEVERAL CHOICES OTHER THAN THE GRADIENT DIRECTION. [WIKIPEDIA: NONLINEAR CONJUGATE GRADIENT]
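For reference, a standard nonlinear conjugate gradient direction update looks like the following (Fletcher-Reeves variant shown). This is a generic sketch of the textbook formula, not what the toolbox currently implements, and the function name is illustrative.

```python
import numpy as np

# Generic textbook nonlinear conjugate gradient direction (Fletcher-Reeves
# variant); a sketch of the standard formula, not the toolbox implementation.
def fletcher_reeves_direction(dJ_new, dJ_old, s_old):
    # beta_k = ||dJ_k||^2 / ||dJ_{k-1}||^2
    beta = np.dot(dJ_new, dJ_new) / np.dot(dJ_old, dJ_old)
    # s_k = dJ_k + beta_k * s_{k-1}; with beta = 0 this reduces to plain
    # gradient descent, which is what the current code appears to do
    return dJ_new + beta * s_old
```

The control update would then be $$u_{k+1}=u_k-\alpha_k s_k$$, with $$\alpha_k$$ from the line search above.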

This routine tells GradientMethod to stop when the chosen tolerance on the derivative (or on the relative error, at the user's choice) is reached. Moreover, a maximum number of iterations is enforced.

MANDATORY INPUTS:

NAME: iCP
DESCRIPTION: Control problem object; it carries all the information about the dynamics, the functional to be minimized, and the updates of the current best control found so far.
CLASS: ControlProblem

NAME: tol
DESCRIPTION: the tolerance desired.
CLASS: double

OPTIONAL INPUT PARAMETERS

NAME: InitialLengthStep
DESCRIPTION: This parameter is the step length of the gradient method used at the beginning of the process. By default, this is 0.1.
CLASS: double

NAME: MinLengthStep
DESCRIPTION: This parameter is the lower bound on the step length of the gradient method; if the algorithm needs a step size smaller than this bound, it will make GradientMethod stop.
CLASS: double

OUTPUT PARAMETERS:

All the updates will be carried inside the iCP control problem object.

Name: Unew
Description: Update of the Control
class: a vector valued function in a form of a double matrix

Name: Ynew
Description: Update of State Vector
class: a vector valued function in a form of a double matrix

Name: Jnew
Description: New Value of functional
class: double

Name: dJnew
Description: New Value of gradient
class: a vector valued function in a form of a double matrix

Name: error
Description: The error |dJ|/|U| or |dJ|, depending on the user's choice.
class: double

Name: stop
Description: if this parameter is true, the routine tells GradientMethod to stop
class: logical

Citations:

[1] Cohen, William C., Optimal control theory—an introduction, Donald E. Kirk, Prentice Hall, Inc., New York (1971), 452 pages. https://onlinelibrary.wiley.com/doi/abs/10.1002/aic.690170452

Improvement of stopping criteria

I think it would also be helpful to have the option to choose which stopping criterion the user wants.
It would be useful to be able to disable the maximum-iterations stopping criterion and to set a stopping criterion based on the step size of the gradient method, i.e. when the step size given by the dividing/multiplying rule is smaller than some value, take the current u as a minimum.

I think it is also a natural way to stop the algorithm: with the maximum-iterations criterion we do not actually know much about the current point (the slope may still be large when the iteration limit is reached), whereas with the step-size criterion we know that when the algorithm stops we are either at a local minimum or close to one.
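The proposed logic could be sketched like this; the function and its names are illustrative, not the toolbox API.

```python
# Sketch of the proposed stopping logic: allow disabling the iteration cap
# and stop on a minimum step size instead. Illustrative names only.
def should_stop(alpha, alpha_min, k, max_iter, use_max_iter=True):
    if alpha < alpha_min:
        # the halving rule cannot decrease J any further: (close to) a minimum
        return True, "step size below alpha_min"
    if use_max_iter and k >= max_iter:
        # iteration cap hit: we know nothing about optimality of the iterate
        return True, "maximum iterations reached"
    return False, ""
```

Returning the reason alongside the flag lets the caller distinguish "converged" from "gave up", which is exactly the information the iteration cap alone does not provide.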

Index Contents

  • Main Description Toolbox
  • Main Features
    - No discretization in time -> High order in time. We use Runge-Kutta instead of implicit/explicit Euler discretization (this is how iPOpt does it)

ClassicalDescent

I would change the name Classical; the name suggests that it should work, while it does not.
I HAVE LOOKED AT THE BOOK YOU SENT ME, BUT I HAVE NOT SEEN THIS ABOUT THE APPROXIMATION. I DO NOT SEE WHY WE ARE NOT COMPUTING THE ADJOINT CORRECTLY. RIGHT NOW I DO NOT HAVE MUCH MORE TIME TO LOOK AT THE BOOK IN DETAIL

ClassicalDescent

This method is used within the GradientMethod method. GradientMethod executes this routine iteratively in order to obtain one update of the control in each iteration. If ClassicalDescent is chosen, this function updates the control in the following way:

$$u_{new}=u_{old}-\alpha\, dJ$$

where dJ is an approximation of the gradient of J obtained via the adjoint state from the optimality conditions of Pontryagin's principle. The optimal control problem is defined by
$$\min J=\min\Psi (t,Y(T))+\int^T_0 L(t,Y,U)dt$$

subject to:

$$\frac{d}{dt}Y=f(t,Y,U).$$

The gradient of $$J$$ is:

$$dJ=\partial_u H=\partial_uL+p\partial_uf$$

An approximation of $$p$$ is computed by solving:

$$-\frac{d}{dt}p = f_Y (t,Y,U)p+L_Y(Y,U)$$

$$ p(T)=\psi_Y(Y(T))$$

Given the expression of the gradient, we can start with an initial control, solve the adjoint problem and evaluate the gradient. We then update the initial control in the direction of the approximate gradient with a step size $$\alpha$$.
In this routine the user has to choose the step size.

WARNING: Using this routine, GradientMethod might not converge if the step size is not chosen properly, or might be slow if the step size is chosen very small. For an adaptive step size with an Armijo rule guaranteeing convergence, see AdaptativeDescent.
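The warning can be illustrated on a toy quadratic, not toolbox code: for $$J(u)=\tfrac12 Lu^2$$ the fixed-step iteration $$u \leftarrow u-\alpha Lu$$ scales the error by $$|1-\alpha L|$$ each step, so it converges only when $$\alpha<2/L$$.

```python
# Toy illustration (not toolbox code): for J(u) = 0.5*L*u^2 the fixed-step
# iteration u <- u - alpha*L*u multiplies |u| by |1 - alpha*L| each step,
# so it converges only when alpha < 2/L.
def run_descent(alpha, L=10.0, u0=1.0, steps=50):
    u = u0
    for _ in range(steps):
        u = u - alpha * L * u
    return abs(u)

small = run_descent(alpha=0.05)   # alpha < 2/L = 0.2: |u| shrinks
large = run_descent(alpha=0.25)   # alpha > 2/L: |u| blows up
```

Since the curvature L of the control functional is generally unknown, a fixed step is a gamble; the adaptive rule sidesteps exactly this.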

This routine tells GradientMethod to stop when the chosen tolerance on the derivative (or on the relative error, at the user's choice) is reached. Moreover, a maximum number of iterations is enforced.

MANDATORY INPUTS:

NAME: iCP
DESCRIPTION: Control problem object; it carries all the information about the dynamics, the functional to be minimized, and the updates of the current best control found so far.
CLASS: ControlProblem

NAME: tol
DESCRIPTION: the tolerance desired.
CLASS: double

OPTIONAL INPUT PARAMETERS

NAME: LengthStep
DESCRIPTION: This parameter is the step length of the gradient method. By default, this is 0.1.
CLASS: double

OUTPUT PARAMETERS

Name: Unew
Description: Update of the Control
class: a vector valued function in a form of a double matrix

Name: Ynew
Description: Update of State Vector
class: a vector valued function in a form of a double matrix

Name: Jnew
Description: New Value of functional
class: double

Name: dJnew
Description: New Value of gradient
class: a vector valued function in a form of a double matrix

Name: error
Description: The error |dJ|/|U| or |dJ|, depending on the user's choice.
class: double

Name: stop
Description: if this parameter is true, the routine tells GradientMethod to stop
class: logical

Citations:

[1] Cohen, William C., Optimal control theory—an introduction, Donald E. Kirk, Prentice Hall, Inc., New York (1971), 452 pages. https://onlinelibrary.wiley.com/doi/abs/10.1002/aic.690170452

Minor bug: manual numeric ODE schemes

For instance, this works:

function [tline,yline] = Euler(odefun,tspan,y0,options)
    % Explicit Euler scheme; states are stored as rows of yline
    tline = tspan;
    yline = zeros(length(tspan),length(y0));
    yline(1,:) = y0;

    for i = 1:length(tspan)-1
        % odefun expects and returns a column vector, hence the double transpose
        vector = odefun(tline(i),yline(i,:)')';
        yline(i+1,:) = yline(i,:) + vector*(tline(i+1)-tline(i));
    end
end

However, in the call to odefun(),
odefun(tline(i),yline(i,:)')';
the need for the double transpose is strange, since it forces the ODE scheme to produce row vectors of state variables.
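One way to avoid the double transpose is to fix a single orientation for the state throughout the scheme, so odefun consumes and returns the same shape. A minimal sketch of that convention in Python/NumPy (illustrative translation; the toolbox routine itself is MATLAB):

```python
import numpy as np

# Single-orientation convention that removes the double transpose: the state
# is always a 1-D array, so odefun takes and returns the same shape.
def euler(odefun, tspan, y0):
    yline = np.zeros((len(tspan), len(y0)))
    yline[0, :] = y0
    for i in range(len(tspan) - 1):
        dt = tspan[i + 1] - tspan[i]
        # no transposes: 1-D state in, 1-D derivative out
        yline[i + 1, :] = yline[i, :] + dt * odefun(tspan[i], yline[i, :])
    return tspan, yline
```

The same idea in MATLAB would mean documenting (and enforcing) one orientation for odefun across all manual schemes, matching what ode45 expects.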

Theoretical or Computational problem. A possible blow-up of the adjoint system

- I started a simulation with my code for a semilinear heat equation.
- I set a non-linearity that may blow up for certain initial conditions.
- For such a non-linearity there are theoretical results saying that even if there might be a blow-up for certain initial data, we can build a control that avoids the blow-up.
- I set initial data for which there is no blow-up before the integration time T=1 (it may exist afterwards).
- One can see that the free dynamics grows but does not blow up.
- HOWEVER, THE CONTROLLED DYNAMICS BLOWS UP when the target of the optimal control is 0.

  • MOREOVER, THE COMPUTED CONTROL IS EQUAL TO ZERO.

Here is the code I used and the plotted graphics (note that the controlled dynamics is not integrated over all of [0,1]).

The function to be executed is plots.m, but it requires the other file as well as the whole DyCon toolbox.

SLSD1doptimalnullcontrol.txt

plots.txt

Attached figures: controlled, free, control
