Gradient descent in R for non-linear optimization (nloptr package)

This post introduces gradient descent optimization in R, using the nloptr package.

For many transport or network modelling problems, linear programming suffices.

However, as soon as additional constraints or objectives are non-linear, non-linear programming becomes relevant.

R provides a package for solving such non-linear problems: nloptr.

In this post I will apply the nloptr package to the non-linear optimization problem below, using a gradient descent methodology. Gradient descent algorithms move in the direction of steepest change, i.e. along the gradient (the vector of first partial derivatives) when maximizing, or against it when minimizing. The logic is that by repeatedly stepping in the direction of steepest descent, the algorithm quickly approaches a (local) optimum.
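Before turning to nloptr, a minimal hand-rolled sketch of this idea for a simple convex function may help; the fixed step size and iteration count below are arbitrary choices for illustration:

# minimal gradient descent sketch for f(x) = x1^2 + x2^2
# (fixed step size and iteration count are illustrative choices)
grad_f <- function(x) c(2 * x[1], 2 * x[2])  # analytic gradient of f

x    <- c(10, 10)  # starting point
step <- 0.1        # fixed step size

for (i in 1:100) {
  x <- x - step * grad_f(x)  # step against the gradient (steepest descent)
}

print(x)  # converges towards the unconstrained minimum at c(0, 0)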

The problem at hand:

min  x1^2 + x2^2
s.t. x1 + x2 <= 100

Important note: when modelling a non-linear problem with nloptr, the problem must be stated as a minimization problem, and every inequality constraint must be written in the form g(x) <= 0 (less-than-or-equal). Equality constraints are supported directly (via the eval_g_eq argument), so you do not have to replace them by two overlapping inequality constraints. Variables may additionally be box-constrained through lower and upper bounds (lb and ub). In any case, a starting point must be provided.
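As a small, purely hypothetical illustration of such a reformulation (the numbers below are made up): a problem of the form "maximize f(x) subject to g(x) >= b" becomes "minimize -f(x) subject to b - g(x) <= 0":

# hypothetical original problem (not the one solved in this post):
#   maximize   -( (x[1]-3)^2 + (x[2]-2)^2 )
#   subject to    x[1] + x[2] >= 5
# reformulated for nloptr as a minimization with a "<= 0" constraint:
eval_f_reform <- function(x) {
  list("objective" = (x[1] - 3)^2 + (x[2] - 2)^2,       # -f(x)
       "gradient"  = c(2 * (x[1] - 3), 2 * (x[2] - 2)))
}
eval_g_reform <- function(x) {
  list("constraints" = 5 - (x[1] + x[2]),  # b - g(x) <= 0
       "jacobian"    = c(-1, -1))
}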

I model and solve this problem with gradient descent, using the nloptr package in R (specifically, its nloptr function):

# load package
library(nloptr)

# define the objective function together with its analytic gradient
# (gradient-based NLopt algorithms require the gradient to be supplied)
eval_f <- function(x){
  return(
    list(
      "objective"=x[1]^2+x[2]^2,
      "gradient"=c(2*x[1],
                   2*x[2])
       )
  )
}

# define the inequality constraint in the required "g(x) <= 0" form,
# together with its Jacobian
eval_g_ineq <- function(x){
  return(
    list(
      "constraints"=x[1]+x[2]-100,
      "jacobian"=c(1,
                   1)
    )
  )
}
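Gradient-based solvers can silently converge to wrong results when the supplied derivatives are incorrect, so it is worth checking them numerically. Below is a minimal central-difference check (the test point and step size are arbitrary choices for this sketch); nloptr also ships a check.derivatives() helper for the same purpose:

# quick central-difference check of the hand-coded gradient
num_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- rep(0, length(x))
    e[i] <- h
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

x_test <- c(3, -2)                                 # arbitrary test point
num_grad(function(x) eval_f(x)$objective, x_test)  # numeric:  ~c(6, -4)
eval_f(x_test)$gradient                            # analytic:  c(6, -4)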

# define starting point
x_0 <- c(10,10)

# solver settings, following the nloptr documentation
# they define, among other things, the exact optimization algorithm:
# AUGLAG handles the constraints via an augmented Lagrangian, while
# MMA serves as the gradient-based local optimizer
local_opts <- list( "algorithm" = "NLOPT_LD_MMA",
                    "xtol_rel"  = 1.0e-7 )

opts <- list( "algorithm" = "NLOPT_LD_AUGLAG",
              "xtol_rel"  = 1.0e-7,
              "maxeval"   = 1000,
              "local_opts" = local_opts )

# model and solve problem
solution <- nloptr(x0=x_0,
                   eval_f=eval_f,
                   lb=NULL,
                   ub=NULL,
                   eval_g_ineq = eval_g_ineq,
                   eval_g_eq = NULL,
                   opts=opts)

Let us now review the optimization outcome:

print(solution)
## 
## Call:
## 
## nloptr(x0 = x_0, eval_f = eval_f, lb = NULL, ub = NULL, eval_g_ineq = eval_g_ineq, 
##     eval_g_eq = NULL, opts = opts)
## 
## 
## Minimization using NLopt version 2.4.2 
## 
## NLopt solver status: 3 ( NLOPT_FTOL_REACHED: Optimization stopped because 
## ftol_rel or ftol_abs (above) was reached. )
## 
## Number of Iterations....: 102 
## Termination conditions:  xtol_rel: 1e-07 maxeval: 1000 
## Number of inequality constraints:  1 
## Number of equality constraints:    0 
## Optimal value of objective function:  1.09462903864043e-94 
## Optimal value of controls: -7.398071e-48 -7.398071e-48
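The solver converges to the origin, where the objective attains its minimum; the inequality constraint x1 + x2 <= 100 is not binding there. Besides the printed summary, the individual parts of the result can be read directly from the returned object:

solution$solution   # optimal values of the decision variables
solution$objective  # optimal objective function value
solution$status     # NLopt status code (here 3, NLOPT_FTOL_REACHED)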

You can find more details on gradient descent optimization on Wikipedia: https://en.wikipedia.org/wiki/Gradient_descent
