Thank you for sharing this paper. As I mentioned in my other comment, I believe this is effective and a good approach. But approach #1 discussed in this post, adding one of the objectives as a constraint, can in my opinion also be used for playing with trade-offs – by solving the problem over and over again at different constraint values.

What I am not sure of is which approach is fastest and most effective in terms of providing insights – for your specific problem and for linear problems in general. It may well be that one approach is better in some situations but inferior in others.

Hi Anjani.

Thanks for the questions and discussion. A couple of thoughts on your comments:

1) The first approach can, in my opinion, be used to play with trade-offs, because you can set the constraint values differently depending on how much you are willing to “sacrifice” the given objective. It of course means you would have to solve the problem many times, with different constraint values for the objective added as a constraint. It is a bit of work and a little messy, but I still think it can solve your issue. The idea is essentially the same as the one you cite from section 2.1.1 – which I comment on below in 2).

2) The idea of “normalizing” each objective function by dividing it by its (max − min) range essentially means that when you weigh the objectives with a coefficient alpha in the range 0% to 100%, you are assigning priorities relative to the unconstrained optimum of each single objective. It is an effective way of normalizing in this case and will allow you to investigate the sensitivity of both objectives with regard to the solution space.
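To make the normalization idea concrete, here is a minimal pure-Python sketch. The function names (`normalize`, `blended_objective`) and all objective values are made up for illustration; they are not from the post's model:

```python
def normalize(value, vmin, vmax):
    """Scale an objective value into [0, 1] using its (max - min) range."""
    return (value - vmin) / (vmax - vmin)

def blended_objective(f1_val, f2_val, alpha,
                      f1_min, f1_max, f2_min, f2_max):
    """Weighted sum of two normalized objectives; alpha in [0, 1]."""
    n1 = normalize(f1_val, f1_min, f1_max)
    n2 = normalize(f2_val, f2_min, f2_max)
    return alpha * n1 + (1 - alpha) * n2

# With alpha = 0.5 both objectives get equal weightage on the same 0-1
# scale, regardless of their original units (e.g. tons vs. dollars).
score = blended_objective(f1_val=500, f2_val=120_000, alpha=0.5,
                          f1_min=0, f1_max=1000,
                          f2_min=100_000, f2_max=200_000)
print(score)  # 0.5 * 0.5 + 0.5 * 0.2, i.e. approximately 0.35
```

In a real LP you would apply the same scaling to the objective coefficients before building the model, with the min and max obtained by first optimizing each objective on its own.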

I am not sure whether the approach discussed in my comment 1) above will give you better insights at less effort than 2), or vice versa. It would be interesting to hear about your findings. I will try to put together an example that addresses your question when I find the time.

Thanks very much for the reply.

But I need weights to play with the trade-off between the two objectives, which is not there in the first method.

I was going through a paper; kindly see section 2.1.1. It talks about dividing each objective by its (max − min) value and then using weights.

Do you think this may help in scaling the objectives, so that when alpha = 0.5 and (1 − alpha) = 0.5 both objectives get equal weightage?

Hello Anjani.

Thank you for your question.

In this post I demonstrated two possible approaches:

1) adding the first objective as a constraint and optimizing the second objective, with the constraint of achieving the first objective to a specified extent

2) adding objectives, resulting in a joint weighted objective function

In this case I would use the first approach. That is, solve e.g. the problem of minimizing the costs while adding the constraint of sourcing some minimum amount by the respective supplier. Depending on the complexity, you will have to set up many sub-problems with different lower bounds for your “sourcing” constraints.
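The repeated sub-problem idea can be sketched as follows. This is a brute-force toy, not the post's actual LP: the supplier costs, the quantity bounds, and the swept constraint values are all invented for illustration, and a real model would use an LP solver instead of enumerating candidates:

```python
from itertools import product

# Toy data: two suppliers; a solution is (q1, q2) units sourced from each.
costs = (4, 7)               # hypothetical cost per unit quoted by each supplier

def total_quantity(q1, q2):  # objective 1 (to be maximized)
    return q1 + q2

def total_cost(q1, q2):      # objective 2 (to be minimized)
    return costs[0] * q1 + costs[1] * q2

candidates = list(product(range(0, 11), repeat=2))  # q1, q2 in 0..10

# Constraint method: minimize cost subject to total_quantity >= epsilon,
# then sweep epsilon to trace out the trade-off between the objectives.
frontier = []
for epsilon in (5, 10, 15):
    feasible = [(q1, q2) for q1, q2 in candidates
                if total_quantity(q1, q2) >= epsilon]
    best = min(feasible, key=lambda q: total_cost(*q))
    frontier.append((epsilon, best, total_cost(*best)))

for epsilon, solution, cost in frontier:
    print(epsilon, solution, cost)
```

Each pass is one sub-problem; raising epsilon forces more sourcing and shows how much extra cost that “sacrifice” buys, which is the trade-off curve.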

And with regard to maximization vs. minimization: minimizing costs is the same as maximizing “negative costs”. For example, instead of solving min x1 + x2, solve max -x1 - x2.
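A quick brute-force check of this equivalence, over a small made-up feasible set (the constraint x1 + 2*x2 >= 4 and the bounds are invented for illustration):

```python
from itertools import product

# Toy feasible set: integer points with x1 + 2*x2 >= 4, 0 <= xi <= 5.
feasible = [(x1, x2) for x1, x2 in product(range(6), repeat=2)
            if x1 + 2 * x2 >= 4]

# min x1 + x2 and max -x1 - x2 should select the same point.
min_point = max_point = None
min_point = min(feasible, key=lambda x: x[0] + x[1])
max_point = max(feasible, key=lambda x: -x[0] - x[1])

print(min_point, max_point)  # prints the same point twice
```

The same trick lets a solver that only minimizes handle a maximization objective: negate the objective coefficients, solve, and negate the optimal value back.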

Here it is …

Both objectives have scaling issues …

1st Obj – Sum of quantities supplied by supplier (Maximization obj)

2nd Obj – quantities provided by Supplier * Cost quoted by them (Min Obj)

Here the second objective contains cost, and that is causing the scaling issue: the blended objective becomes negative (for maximization), and hence the overall problem effectively becomes a minimization problem.

Thus I need to set the weight for the first objective > 0.95 in order to get any output; otherwise the output comes out as zero.

Is there a simplified way to handle the scaling of blended objectives (multi-objective problems)?

Dear Kishalay. Excellent question. It is possible to do this, yes! Referring to your example, write the function like this:

def f1(x1, x2):
    return 2*x1 + 3*x2

and then write

linearProblem += f1(x1,x2)

You are welcome 🙂
