In other posts I have demonstrated how to implement linear programming in R and Python, and I have shared examples of gradient-descent optimization for non-linear problems.
In this post I want to give an overview of the various types of optimization problems. First, I categorize optimization problems by their linearity and convexity. The figure below illustrates this:
Linear problems are always convex: their objective function is linear and therefore convex, and all of their constraints are linear and therefore define a convex feasible region.
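As a minimal sketch of such a linear problem, here is a small made-up linear program solved with SciPy's `linprog` (the coefficients are illustrative, not from any earlier post; `linprog` minimizes, so the objective is negated to maximize):

```python
# Maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# All numbers are illustrative. linprog minimizes, so negate the objective.
from scipy.optimize import linprog

c = [-3, -2]                  # negated objective coefficients
A_ub = [[1, 1], [1, 0]]       # left-hand sides of the <= constraints
b_ub = [4, 2]                 # right-hand sides of the <= constraints
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
optimum = -res.fun            # undo the negation to recover the maximum
```

Because the problem is convex, any optimum the solver reports is the global optimum.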
Non-linear problems can be convex or non-convex. I further categorize non-convex optimization problems by the solvers and algorithms that can be applied to them.
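A small sketch of the difference, assuming SciPy is available (the functions are made-up examples): a convex objective has a single minimum that any start converges to, while a non-convex objective has several local minima, so a local solver's answer depends on the starting point.

```python
import numpy as np
from scipy.optimize import minimize

# Convex: x^2 has a single minimum; any starting point converges to x = 0.
convex = minimize(lambda x: x[0] ** 2, x0=[5.0])

# Non-convex: cos(x) + x^2/20 has several local minima; a local solver
# lands in a different one depending on where it starts.
f = lambda x: np.cos(x[0]) + x[0] ** 2 / 20
from_left = minimize(f, x0=[-4.0])
from_right = minimize(f, x0=[4.0])
```

This is why non-convex problems call for different algorithms (multi-start, global, or heuristic methods) than convex ones.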
In addition to the above categorization, further criteria apply when classifying optimization problems. These are displayed in the figure below:
Continuous problems have continuous decision variables, while discrete problems have a discrete solution space (e.g. integer or binary variables).
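A minimal sketch of a discrete problem, with made-up data: a tiny 0/1 knapsack whose finite solution space can simply be enumerated.

```python
# Tiny 0/1 knapsack: pick items to maximize value within a weight capacity.
# Values, weights, and capacity are hypothetical.
from itertools import product

values = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

best_value, best_pick = 0, None
for pick in product([0, 1], repeat=len(values)):   # all 2^3 solutions
    weight = sum(w * p for w, p in zip(weights, pick))
    if weight <= capacity:
        value = sum(v * p for v, p in zip(values, pick))
        if value > best_value:
            best_value, best_pick = value, pick
```

Real discrete problems are far too large to enumerate, which is why dedicated integer-programming solvers and heuristics exist, but the discrete nature of the solution space is the same.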
Unconstrained problems place no restrictions on the solution space, while constrained problems restrict it through equality or inequality constraints.
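The contrast can be sketched with SciPy on a made-up objective: the same function is minimized once over the whole plane and once with an inequality constraint restricting the solution space.

```python
from scipy.optimize import minimize

f = lambda v: v[0] ** 2 + v[1] ** 2   # illustrative objective

# Unconstrained: the whole plane is feasible; the minimum is the origin.
unconstrained = minimize(f, x0=[1.0, 1.0])

# Constrained: restrict the solution space to x + y >= 1.
con = {"type": "ineq", "fun": lambda v: v[0] + v[1] - 1}
constrained = minimize(f, x0=[1.0, 1.0], constraints=[con])
```

The constraint pushes the optimum from the origin to the boundary of the feasible region (x = y = 0.5), with a correspondingly higher objective value.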
Some problems seek to minimize or maximize a single objective. Other problems consider multiple objectives.
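One common way to handle multiple objectives is to scalarize them into a single objective, e.g. with a weighted sum. A minimal sketch with SciPy, using two made-up conflicting objectives and illustrative weights:

```python
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2   # first objective: be close to 1
f2 = lambda x: (x[0] + 1) ** 2   # second objective: be close to -1

w = 0.5  # illustrative weight; varying it traces different trade-offs
res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0])
```

With equal weights the compromise lands halfway between the two individual optima; sweeping the weight from 0 to 1 traces out the trade-off curve between the objectives.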
Some problems are deterministic, while others involve uncertainty in some of their parameters (e.g. expected demand or sales, where a forecast may move within a range of estimates).
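A minimal sketch of one way to handle such uncertainty, with made-up numbers: represent the demand forecast as a few scenarios and choose the order quantity that maximizes expected profit across them (a scenario-based expected-value approach).

```python
# Choose an order quantity under uncertain demand.
# Price, cost, and the equally likely demand scenarios are hypothetical.
price, cost = 10, 6
scenarios = [80, 100, 120]

def expected_profit(order):
    # Revenue is capped by realized demand in each scenario.
    revenue = sum(price * min(order, d) for d in scenarios) / len(scenarios)
    return revenue - cost * order

best_order = max(range(80, 121), key=expected_profit)
```

Note that the best order under uncertainty differs from simply ordering the average demand: it balances the cost of over-ordering against the lost revenue from under-ordering.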