Reliable A.I.

Reliable A.I. (also called trusted A.I.) covers a set of technologies that allow us to better understand and evaluate the behavior of an A.I. system.

This is an important research theme within Logiroad.ai. We have supervised several theses on this subject.

For example, we are working on evaluating the uncertainty in the output of a neural network. This allows us to make better decisions about which actions to take following a prediction from the network.
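As a minimal sketch of this idea (not a description of Logiroad's actual pipeline), the highest softmax probability or the entropy of the output distribution can serve as a simple uncertainty signal. The tiny model, the input, and the 0.8 threshold below are illustrative assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Illustrative stand-in for the real network: a tiny 3-class classifier.
model = nn.Linear(16, 3)
model.eval()

x = torch.randn(1, 16)  # placeholder input features
with torch.no_grad():
    probs = F.softmax(model(x), dim=-1)

confidence = probs.max().item()                                 # highest class probability
entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()  # predictive entropy

# Hypothetical decision rule: act automatically only on confident predictions.
if confidence < 0.8:
    print(f"uncertain (confidence {confidence:.2f}, entropy {entropy:.2f}): send to human review")
else:
    print(f"confident ({confidence:.2f}): act on the prediction")
```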

To do this, one of the techniques we deploy is to measure the values of the weights and the signals flowing through the artificial neurons. This tells us whether the network is making a strong prediction, where everything converges towards the result, or a weak one, where the signals are more scattered.
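The sketch below shows one way to inspect those internal signals in PyTorch, using forward hooks to record each layer's activations. The small network, the input, and the dispersion measure are assumptions made for the example, not the measurements Logiroad uses in production.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Illustrative network; in practice this would be the deployed model.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 3),
)
model.eval()

activations = {}

def save_activation(name):
    # Forward hook that records the output signal of a layer.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(save_activation(name))

x = torch.randn(1, 16)
with torch.no_grad():
    probs = F.softmax(model(x), dim=-1)

# A "strong" prediction tends to show a peaked output and layer signals dominated
# by a few units; a "weak" one looks more scattered across the neurons.
for name, act in activations.items():
    flat = act.abs().flatten()
    spread = flat.std() / (flat.mean() + 1e-12)  # rough dispersion measure
    print(f"layer {name}: mean |activation|={flat.mean():.3f}, relative spread={spread:.2f}")
print("output distribution:", probs.numpy().round(3))
```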

We can also identify which areas of an image led to a given decision, in order to verify that the neural network made the right decisions for the right reasons.
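A standard way to do this is a gradient-based saliency map, which highlights the pixels that most influence the predicted class; the convolutional model and random image below are placeholders, and this is only one of several attribution techniques the text may be referring to.

```python
import torch
from torch import nn

# Placeholder convolutional classifier and a random "image"; in practice these
# would be the trained model and a real camera frame.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Gradient saliency: how much does each pixel influence the predicted class score?
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Aggregate the gradient magnitude over the colour channels into a heat map.
saliency = image.grad.abs().max(dim=1).values[0]  # shape: (64, 64)
row, col = divmod(saliency.argmax().item(), saliency.shape[1])
print(f"predicted class {predicted_class}; most influential pixel at ({row}, {col})")
```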

It is also possible to add, on top of the neural network, a highly interpretable model that tries to reproduce the network's behavior and can detail the reasons that led it to a particular choice.
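One common realisation of this idea is a surrogate model: a shallow decision tree trained to imitate the network's own predictions, whose rules can then be read directly. The network, features, and tree depth below are assumptions for illustration, not the models used by Logiroad.

```python
import numpy as np
import torch
from torch import nn
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder "black box" network and tabular features standing in for the real system.
torch.manual_seed(0)
black_box = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
black_box.eval()

X = np.random.rand(500, 4).astype(np.float32)
with torch.no_grad():
    # Labels come from the network, not the ground truth: the surrogate imitates the network.
    y_network = black_box(torch.from_numpy(X)).argmax(dim=1).numpy()

# Highly interpretable surrogate: a shallow decision tree trained to reproduce the network.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_network)
fidelity = surrogate.score(X, y_network)  # how faithfully the tree mimics the network

print(f"surrogate reproduces {fidelity:.0%} of the network's decisions")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))
```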

Another technique is to make repeated predictions on the same input while turning off a different part of the network each time. This provides a statistical estimate of the uncertainty and helps us better understand out-of-domain data.
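A technique in this spirit is Monte Carlo dropout: dropout is kept active at prediction time and the forward pass is repeated, so a different subset of units is switched off on each run. The network, dropout rate, and number of passes below are illustrative choices.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Placeholder network with a dropout layer; the real model would be the deployed one.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)
model.train()  # keep dropout active so part of the network is "turned off" on each pass

x = torch.randn(1, 16)
with torch.no_grad():
    # Repeat the prediction many times; each pass drops a different random subset of units.
    samples = torch.stack([F.softmax(model(x), dim=-1) for _ in range(100)])

mean_probs = samples.mean(dim=0)  # averaged prediction
std_probs = samples.std(dim=0)    # disagreement across passes = uncertainty estimate

print("mean prediction:", mean_probs.numpy().round(3))
print("per-class std  :", std_probs.numpy().round(3))
# A large spread suggests the input may lie outside the training domain.
```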
