In general, studying the properties of linear systems and their solutions is much simpler than doing so for non-linear systems. Throughout history, linearizing functions around given values has been a widely used technique to analyse the behaviour of non-linear systems. Linearization thus consists of finding a linear function that approximates a given function around a point.

The first step in solving an **optimization problem** is to model reality in mathematical language, that is, to rewrite it in terms of variables and the relations between them. Often, reality leads to non-linear relations between those variables. **In mathematical programming, the concept of linearization consists of approximating a given function by a linear function over an interval** so that the techniques and solvers developed for linear problems can be applied. For anyone not familiar with this concept, a linear function is a first-degree polynomial, in other words, something like:

[math]f(x) = ax + b[/math]

This article aims to describe some of the most frequent cases of linearization used in the modelling of optimization problems.

**Continuous Non-Linear Functions:** A continuous non-linear function can be approximated with linear segments, so that the result is a set of linear constraints to be included in the model.

The formulation of this linearization can be found in the blog article “How to Formulate a Piecewise Linear Function”.
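As a minimal sketch of the idea, the following code approximates a non-linear function with linear segments between breakpoints. The function f(x) = x², the breakpoint values, and the helper name `piecewise_eval` are illustrative choices, not taken from the article or the referenced formulation.

```python
# Sketch: approximating f(x) = x^2 on [0, 4] with linear segments
# joining a set of breakpoints (illustrative data).

def piecewise_eval(x, breakpoints, values):
    """Evaluate the piecewise linear interpolant at x."""
    for (x0, y0), (x1, y1) in zip(
        zip(breakpoints, values), zip(breakpoints[1:], values[1:])
    ):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)       # position within the segment
            return y0 + t * (y1 - y0)      # linear interpolation
    raise ValueError("x outside the breakpoint range")

breakpoints = [0, 1, 2, 3, 4]
values = [b ** 2 for b in breakpoints]     # exact f at the breakpoints

print(piecewise_eval(2.5, breakpoints, values))  # 6.5, vs the exact 6.25
```

More breakpoints tighten the approximation; in an actual model the interpolation would be expressed with auxiliary variables and linear constraints rather than evaluated directly.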

**Discontinuous Functions:** Likewise, a non-continuous function with finite jump discontinuities can be modelled with a discontinuous piecewise linear function. The way this linearization is formulated can be found in the blog article “Discounts and penalties. Non-continuous PWL”.

**Discontinuous Ranges of Decision Variables:** Many times, the range of a decision variable x is defined over disjoint intervals, such as a ≤ x ≤ b or c ≤ x ≤ d. To represent this situation, a binary variable σ is introduced so that:

[math]a\sigma + c(1-\sigma) \leq x \leq b\sigma + d(1-\sigma)[/math]

**Maximum and minimum:** Modelling of the condition [math]y = \max_{i \in I}(x_{i})[/math] is represented with the following requirements:

1. The variable y must be greater than or equal to every variable [math]x_{i}[/math], that is, [math]y \geq x_{i} \hspace{0.3cm} \forall i \in I[/math]

2. There is at least one [math]i \in I[/math] such that [math]y = x_{i}[/math]

Let [math]\sigma_{i}[/math] be a binary variable such that [math]\sigma_{i} = 1[/math] if [math]y \leq x_{i}[/math], and 0 otherwise, with [math]i \in I[/math]. The following condition ensures that condition 2 is met:

[math]\sum_{i \in I} \sigma_{i} \geq 1[/math]

You would now just have to model the definition of σ by adding the set of constraints

[math]y - x_{i} \leq M_{i}(1-\sigma_{i}) \hspace{0.3cm} \forall i \in I[/math]

where [math]M_{i}[/math] is an upper bound on [math]y - x_{i}[/math].

If the variable y appeared in the objective function of a minimization problem, we could consider dropping the constraints of condition 2, since minimization already pushes y down towards the largest [math]x_{i}[/math]. In practice, this simplification is only valid if you are certain the optimum will always be reached; otherwise the condition may not hold.

The modelling of the minimum would be done in a similar way.
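The two requirements above can be verified by brute force: combining the bounds y ≥ x_i, the cover condition Σσ_i ≥ 1, and the big-M links y − x_i ≤ M_i(1 − σ_i) leaves y = max(x) as the only feasible value. The data and M values below are illustrative.

```python
# Sketch: brute-force check that the big-M formulation of y = max_i x_i
# admits exactly y = max(x). Illustrative data.

from itertools import product

x = [2, 5, 3]
M = [10] * len(x)              # each M_i must exceed y - x_i

def satisfies(y, sigma):
    c1 = all(y >= xi for xi in x)                   # y >= x_i for all i
    c2 = sum(sigma) >= 1                            # at least one sigma_i = 1
    c3 = all(y - xi <= Mi * (1 - s)                 # big-M link to y <= x_i
             for xi, Mi, s in zip(x, M, sigma))
    return c1 and c2 and c3

feasible_y = {
    y for y in range(0, 11)
    for sigma in product((0, 1), repeat=len(x))
    if satisfies(y, sigma)
}
print(feasible_y)   # {5} -- only y = max(x) survives
```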

**Product of variables:** The product of two variables can be linearized if at least one of them is bounded. The most frequent case is the product of two binary variables, δ = α∙β, which can be modelled as follows:

[math]\delta \leq \alpha, \hspace{0.3cm} \delta \leq \beta, \hspace{0.3cm} \delta \geq \alpha + \beta - 1[/math]
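The standard linearization of a product of binaries, δ ≤ α, δ ≤ β, δ ≥ α + β − 1, can be checked exhaustively: for each of the four (α, β) combinations, the only feasible δ is exactly the product. The helper name `linearized_product` is illustrative.

```python
# Sketch: verifying that the linear constraints
#   delta <= alpha, delta <= beta, delta >= alpha + beta - 1
# pin delta to alpha * beta for every binary combination.

from itertools import product

def linearized_product(alpha, beta):
    """Feasible values of delta under the three linear constraints."""
    return [
        delta for delta in (0, 1)
        if delta <= alpha and delta <= beta and delta >= alpha + beta - 1
    ]

for alpha, beta in product((0, 1), repeat=2):
    assert linearized_product(alpha, beta) == [alpha * beta]
print("linearization matches the product on all 4 cases")
```

Note that δ need not even be declared binary: the three constraints together with 0 ≤ δ ≤ 1 already force it to the integer value of the product.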

There are more cases of linearization and modelling than those described in this article. Trying to write them all down would be absurd, if not impossible. For anyone wanting to learn more about modelling techniques, the book “Model Building in Mathematical Programming” by H. Paul Williams is very interesting.