As is well known, black-box algorithms are those for which we know the input data the model receives and the response it produces, but, as a general rule, not how that response is arrived at.
Many artificial intelligence techniques, such as machine learning models, are considered black boxes: they make decisions based on advanced data analysis, but they offer no traceability of how each decision was reached.
Predictive techniques such as machine learning are extremely useful for estimating future probabilities from past data, and they can provide a great deal of value in certain business sectors. But what happens in heavily regulated sectors, such as banking or insurance, or when a decision can directly affect people's lives?
In such cases, it seems reasonable to make decisions using technologies that let the user know why a given decision has been made.
As we can see in the following graph, the options with greater predictive capacity generally have lower interpretability than alternatives such as decision trees or inference engines, which are normally used by BRMS for decision making based on business rules.
However, what is really interesting is that these options can be combined so that we obtain the benefits of both: the predictive power of machine learning models, and the visibility and transparency of business rules management systems (BRMS). In this way, business rules can act as the regulatory layer over machine learning models.
Suppose you want to create a risk-scoring model that assesses the future ability to pay of each customer applying for credit and, based on that score, decides whether or not to grant the credit. One option would be to use machine learning alone to produce the risk score and make the decision from it. In that case, the model itself would accept or deny the loan.
Now let's imagine that a customer whose credit has been denied wants to know why it was not granted, arguing that they have more than enough solvency to pay, or simply wants to know what they can do to improve their credit score. Using machine learning alone, we cannot explain why the model refused the loan; we only know the score it assigned based on the data provided. It is clear that in cases like this we need a mechanism that allows us to answer these questions.
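To make the limitation concrete, here is a minimal sketch of an ML-only decision. The model, threshold, and feature values are all hypothetical; `predict_proba` mimics a scikit-learn-style classifier interface. Notice that the function can only return a verdict, never a reason:

```python
# Hypothetical ML-only credit decision: the trained model is a black box.
# "predict_proba" mimics a scikit-learn-style classifier interface.

def ml_only_decision(model, applicant_features, threshold=0.5):
    """Return only granted/denied; no explanation is available."""
    risk = model.predict_proba([applicant_features])[0][1]  # estimated default risk
    return "denied" if risk >= threshold else "granted"

# A stub standing in for a real trained model, just to show the interface.
class StubModel:
    def predict_proba(self, rows):
        return [[0.3, 0.7] for _ in rows]  # fixed 70% default risk

print(ml_only_decision(StubModel(), [42, 25000]))  # → denied
```

The only output is the decision itself; asked "why was I denied?", this design has nothing to say.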
On the other hand, if instead of using machine learning alone we combine it with a BRMS, the machine learning model can produce an initial risk score for each customer, and that score becomes one more piece of data taken into account by the BRMS decision service.
In other words, if the rules of a BRMS decision service supervise the machine learning model, we have traceability of every decision taken, while also guaranteeing compliance with current regulations and with the internal guidelines of companies or international bodies.
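The combined approach can be sketched as follows. This is a simplified illustration, not a real BRMS: the rule names, thresholds, and policy references are invented for the example. The key point is that the ML score is just one input among explicit, auditable rules, and every decision carries the list of rules that fired:

```python
# Hypothetical rule layer supervising an ML risk score. A real BRMS would
# externalize these rules; thresholds and rule names here are invented.

def credit_decision(ml_risk_score, income, existing_debt):
    """Decide on a credit application and record which rules fired."""
    reasons = []  # fired rules, giving full traceability of the decision

    if ml_risk_score > 0.8:
        reasons.append("R1: ML risk score above 0.8 (internal risk policy)")
    if existing_debt > 0.4 * income:
        reasons.append("R2: existing debt exceeds 40% of income (solvency rule)")
    if income < 12000:
        reasons.append("R3: income below required minimum")

    decision = "denied" if reasons else "granted"
    return {"decision": decision, "reasons": reasons}

result = credit_decision(ml_risk_score=0.85, income=30000, existing_debt=5000)
print(result["decision"])  # → denied
print(result["reasons"])   # → ["R1: ML risk score above 0.8 (internal risk policy)"]
```

When the customer from the earlier example asks why their credit was denied, the answer is now explicit: rule R1 fired because the risk score exceeded the policy threshold, and that is also exactly what they would need to improve.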
In this way, the result of the predictive analysis would remain at our disposal to improve decision-making, but it would not be a black-box algorithm that determines whether or not to grant credit: the decision-making model would remain under human control.
If you would like to learn more about how we can help you improve your predictive models, contact us. Do you want to learn more about decide4AI and keep up to date with future webinars or events? Follow us on social networks (LinkedIn, Twitter, YouTube).