# Epsilon constraint optimization in Python

In previous posts, we discussed using PuLP in Python to implement multi-objective linear optimization methods, such as maximizing one objective, adding it as a constraint, and then solving for the other objective (or applying a scalar weight factor to that constraint), and combining objectives using sampled weights and iterations with a defined step size.

In this post, I describe the popular augmented epsilon constraint method ($\epsilon$-constraint, eps-constraint, or AUGMECON for short). It is implemented in Python using PuLP to automatically generate the Pareto front (the set of Pareto-optimal solutions) for multi-objective optimization problems (i.e., those with more than one objective).

AUGMECON is a two-stage approach for solving multi-objective optimization problems:

1. Constructing a payoff table: this step exposes how the objective functions conflict with each other, and a range for each objective is calculated. If the objectives do not conflict, if the model turns out to be infeasible or unbounded, or if the range of values for some objective function equals zero, the solution process cannot continue, and the modeler should check the model for such errors.
2. Generating Pareto-optimal (non-dominated) solutions: once the payoff table has been obtained successfully, a new optimization problem (based on the original one) is created and solved iteratively.

I consider the following educational example provided by Mavrotas (2009):

$$
\begin{aligned}
\max \quad & z_1 = x_1 \\
\max \quad & z_2 = 3x_1 + 4x_2 \\
\text{s.t.} \quad & x_1 \le 20 \\
& x_2 \le 40 \\
& 5x_1 + 4x_2 \le 200 \\
& x_1, x_2 \ge 0
\end{aligned}
$$

Below, I describe the implementation procedure of AUGMECON in Python step-by-step:

## Step 1. Including Python packages (e.g. PuLP)

As usual, we should include the packages we use. Here, I use PuLP to solve the single-objective linear subproblems, NumPy to store and retrieve the data generated during the process, and sys to control the code execution. Finally, I use Matplotlib for plotting.

```python
import pulp as op
import numpy as np
import sys
import matplotlib.pyplot as plt
```

## Step 2. Define the problem using PuLP in Python

Next, I define the optimization problem under investigation, using Python dictionaries together with the functions and data types provided by PuLP for defining decision variables. Notably, at this stage no PuLP model is defined yet; only its elements are coded.

```python
# decision variables
x = {
    0: op.LpVariable("x1", lowBound=0, upBound=20),
    1: op.LpVariable("x2", lowBound=0, upBound=40),
}

# value of objective k given the decision variables (or their values) in a
def obj_val(k, a):
    if k == 0:
        return a[0]
    if k == 1:
        return 3 * a[0] + 4 * a[1]

# objective functions
obj = {
    0: obj_val(0, x),
    1: obj_val(1, x),
}

# constraints
cons = {
    0: 5 * x[0] + 4 * x[1] <= 200,
}
```

## Step 3. Generate the payoff table using PuLP

Then, I generate the payoff table using lexicographic optimization (similar to Approach #1 in an earlier post), as follows:

```python
# generating the payoff table using lexicographic optimization
payoff = np.zeros([len(obj), len(obj)])
for k in range(len(obj)):
    # first, optimize objective k on its own
    model = op.LpProblem("Max", op.LpMaximize)
    model += obj[k]
    for i in range(len(cons)):
        model += cons[i]
    result = model.solve()
    if op.LpStatus[result] == 'Optimal':
        print(op.LpStatus[result] + " ; max value = " + str(op.value(model.objective)) +
              " ; x1_opt = " + str(op.value(x[0])) +
              " ; x2_opt = " + str(op.value(x[1])))
        payoff[k, k] = op.value(model.objective)
        # then optimize the successor objectives, each time fixing the
        # previous objective at its optimal level
        kp = k + 1
        model = op.LpProblem("Max", op.LpMaximize)
        while kp <= len(obj) - 1:
            model += obj[kp]
            if kp - 1 >= 0:
                model += obj[kp - 1] >= payoff[k, kp - 1]
            for i in range(len(cons)):
                model += cons[i]
            result = model.solve()
            if op.LpStatus[result] == 'Optimal':
                print(op.LpStatus[result] + " ; max value = " + str(op.value(model.objective)) +
                      " ; x1_opt = " + str(op.value(x[0])) +
                      " ; x2_opt = " + str(op.value(x[1])))
                payoff[k, kp] = op.value(model.objective)
                kp += 1
            else:
                sys.exit('no optimal solution for mod_payoff')
        # finally, optimize the predecessor objectives, keeping objective k
        # at its optimal level
        kp = 0
        model += obj[k] >= payoff[k, k]
        while kp < k:
            model += obj[kp]
            if kp - 1 >= 0:
                model += obj[kp - 1] >= payoff[k, kp - 1]
            for i in range(len(cons)):
                model += cons[i]
            result = model.solve()
            if op.LpStatus[result] == 'Optimal':
                print(op.LpStatus[result] + " ; max value = " + str(op.value(model.objective)) +
                      " ; x1_opt = " + str(op.value(x[0])) +
                      " ; x2_opt = " + str(op.value(x[1])))
                payoff[k, kp] = op.value(model.objective)
                kp += 1
            else:
                sys.exit('no optimal solution for mod_payoff')
    else:
        sys.exit('no optimal solution for mod_payoff')
```

The above algorithm cycles through the payoff table row by row, treating each predecessor objective as a constraint.
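As a sanity check, the payoff table for this small example can be reproduced with four short lexicographic solves. This is a minimal sketch: the `solve_max` helper and the inline constraints are introduced here for illustration only and are not part of the code above.

```python
import pulp as op

x1 = op.LpVariable("x1", lowBound=0, upBound=20)
x2 = op.LpVariable("x2", lowBound=0, upBound=40)

def solve_max(objective, extra=()):
    # build a fresh problem maximizing the given expression,
    # subject to the shared constraint plus any extra constraints
    m = op.LpProblem("payoff_check", op.LpMaximize)
    m += objective
    m += 5 * x1 + 4 * x2 <= 200
    for c in extra:
        m += c
    m.solve(op.PULP_CBC_CMD(msg=0))
    return op.value(m.objective)

# row 1: optimize z1 first, then z2 with z1 fixed at its optimum
z1_best = solve_max(x1)                                 # z1* = 20
z2_lex = solve_max(3 * x1 + 4 * x2, (x1 >= z1_best,))   # z2 = 160
# row 2: optimize z2 first, then z1 with z2 fixed at its optimum
z2_best = solve_max(3 * x1 + 4 * x2)                    # z2* = 184
z1_lex = solve_max(x1, (3 * x1 + 4 * x2 >= z2_best,))   # z1 = 8
print([[z1_best, z2_lex], [z1_lex, z2_best]])           # the payoff table
```

The diagonal of the table holds each objective's individual optimum (20 and 184), while the off-diagonal entries (160 and 8) show how much each objective must concede when the other is held at its best value.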

## Step 4. Generate the Pareto-optimal solutions

Next, the range of each objective (per column of the payoff table) is calculated:

```python
# range of each objective over the payoff table (per column)
minobj = np.zeros(len(obj))
maxobj = np.zeros(len(obj))
for k in range(len(obj)):
    minobj[k] = min(payoff[:, k])
    maxobj[k] = max(payoff[:, k])
```


Next, additional variables for the eps-constraint problem (known as slack variables) are defined. Notably, these variables are defined for every objective except the first one.

```python
# slack variables
s = {
    1: op.LpVariable("s2", lowBound=0),
}
```

Finally, the range of each remaining objective (here, objective 2) is divided into intervals, and grid points are determined. The grid points control the epsilon values, which form the right-hand sides (RHS) of the objective functions that appear as constraints.

```python
intervals = 4
lst = np.empty([intervals + 1, len(obj)])
code = 1 / (maxobj - minobj)  # scaling factors for the slack terms
for g in range(intervals + 1):
    eps = maxobj[1] - (g / intervals) * (maxobj[1] - minobj[1])
    print('grid point no:', g + 1, 'val:', eps)
    model = op.LpProblem("Max", op.LpMaximize)
    # augmented objective: objective 1 plus a small reward for the slack
    model += obj[0] + 1e-3 * code[1] * s[1]
    # eps-constraint on objective 2, written as an equality using the slack
    model += obj[1] - s[1] == eps
    for i in range(len(cons)):
        model += cons[i]
    result = model.solve()
    if op.LpStatus[result] == 'Optimal':
        d = [op.value(x[0]), op.value(x[1])]
        for k in range(len(obj)):
            lst[g, k] = obj_val(k, d)
    else:
        print('early exit (jump)')
        break
```

To understand the effect of the grid points, the following figure is helpful:

Starting from the best grid point (i.e., the point where objective two equals 184), the RHS is decreased (maxobj - (g/intervals)*(maxobj - minobj)) using the grid-point iterator (g in the code) and the range of objective 2, which is divided into intervals. If the problem becomes infeasible, the loop breaks early (the early-exit strategy). Finally, the saved Pareto-optimal solutions (lst[g, k] = obj_val(k, d) in the code) are plotted as follows:
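Numerically, taking the range of objective 2 from the payoff table of this example (160 to 184) and four intervals, the epsilon values visited by the loop work out to:

```python
# range of objective 2 taken from the payoff table of this example
z2_min, z2_max = 160.0, 184.0
intervals = 4
eps_values = [z2_max - (g / intervals) * (z2_max - z2_min)
              for g in range(intervals + 1)]
print(eps_values)  # [184.0, 178.0, 172.0, 166.0, 160.0]
```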

```python
plt.scatter(lst[:, 0], lst[:, 1])
# set axis labels
plt.xlabel("Z1")
plt.ylabel("Z2")
# set chart title
plt.title("Pareto-Optimal Solutions")
plt.show()
```

Most of the time, the decision-maker wants to select just one of the Pareto-optimal solutions. For this purpose, the objectives (here z1 and z2) are compared using methods such as the best-worst method (BWM) or the ordinal priority approach (OPA). These are multi-criteria decision-making (MCDM) methods for prioritizing and weighting the criteria that affect the final decision. For instance, if BWM yields the objective weights $w_1=0.4$ and $w_2 = 0.6$, then the Pareto-optimal solution that maximizes the weighted sum of the (normalized) objective values is selected. We will cover more MCDM topics in future posts.
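This selection step can be sketched as follows. The five Pareto-optimal points are the ones the grid of this example produces, the weights are the $w_1=0.4$, $w_2=0.6$ from above, and the min-max normalization is one common choice, not something mandated by BWM itself:

```python
# Pareto-optimal points (z1, z2) produced by the eps-constraint loop above
pareto = [(8, 184), (11, 178), (14, 172), (17, 166), (20, 160)]
w1, w2 = 0.4, 0.6  # objective weights, e.g. derived with BWM

z1s = [p[0] for p in pareto]
z2s = [p[1] for p in pareto]

def norm(v, vals):
    # min-max normalization so both objectives are comparable on [0, 1]
    return (v - min(vals)) / (max(vals) - min(vals))

scores = [w1 * norm(z1, z1s) + w2 * norm(z2, z2s) for z1, z2 in pareto]
best = pareto[scores.index(max(scores))]
print(best)  # (8, 184): the point with the highest weighted score
```

With these particular weights, objective 2 dominates the trade-off, so the solution that maximizes z2 wins; weights favoring z1 would shift the choice toward the other end of the front.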