Identify the language that leads to interpersonal conflict

Use case

We tested several natural-language-processing models while designing a mobile application. The goal was to identify the parts of sentences containing varying degrees of violence, or phrasings likely to trigger conflict, and to propose appropriate advice.

We started with a dataset of several thousand sentences, annotated by mediators and trainers certified in Nonviolent Communication.

Each sentence is given a nonviolence score, along with two classification labels that determine which advice to give.

We designed a cleaning step that rewrites the sentences in a homogeneous form, for example by expanding abbreviations. Normalized text is easier for the model to interpret.
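A minimal sketch of such a cleaning step is shown below. The abbreviation table is a hypothetical example, not the project's actual mapping:

```python
import re

# Hypothetical French abbreviation mapping; the real system's table
# would be much larger.
ABBREVIATIONS = {
    "svp": "s'il vous plaît",
    "bcp": "beaucoup",
    "tjs": "toujours",
}

def clean_sentence(text: str) -> str:
    """Rewrite a sentence in a homogeneous form for the model."""
    text = text.lower().strip()
    # Expand each known abbreviation (whole words only).
    for abbr, full in ABBREVIATIONS.items():
        text = re.sub(rf"\b{re.escape(abbr)}\b", full, text)
    # Collapse any repeated whitespace introduced by the substitutions.
    return re.sub(r"\s+", " ", text)

print(clean_sentence("Tu m'écoutes JAMAIS,  bcp  trop occupé"))
# → tu m'écoutes jamais, beaucoup trop occupé
```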

We then augmented the dataset to artificially increase the number of training examples, using two strategies. The first replaces some words with synonyms, enriching the vocabulary the model learns; the second automatically translates a sentence into another language and back into French, producing a different wording with the same meaning (back-translation).
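The two augmentation strategies can be sketched as follows. The synonym table and the `translate()` callable are placeholders: in practice a thesaurus (or word embeddings) and a machine-translation service would supply them.

```python
import random

# Illustrative synonym entries only; a real system would use a thesaurus.
SYNONYMS = {
    "énervé": ["agacé", "irrité"],
    "jamais": ["aucunement"],
}

def synonym_augment(sentence: str, rng: random.Random) -> str:
    """Replace words that have known synonyms with a random synonym."""
    out = []
    for word in sentence.split():
        choices = SYNONYMS.get(word)
        out.append(rng.choice(choices) if choices else word)
    return " ".join(out)

def back_translate(sentence: str, translate) -> str:
    """Round-trip through another language to get a paraphrase.

    translate(text, src, dst) is assumed to call an external MT service.
    """
    english = translate(sentence, "fr", "en")
    return translate(english, "en", "fr")
```

Both functions leave the sentence's meaning (and therefore its annotation) intact, so each original row can yield several training rows.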

We then tested three types of models:

  • A lightweight model, which feeds the sentence to a neural network with one input per dictionary word, plus inputs for some groups of adjacent words. Despite the large input vector, this model is extremely fast: fast enough for real-time inference on any mobile device, without a server.
  • An intermediate model, which splits the sentence into subject/verb/complements and transforms each part into a meaning vector with the well-established Word2Vec tool.
  • An advanced, state-of-the-art model: we imported BERT, a large neural network pre-trained by Google on billions of documents, and adapted it to our needs. This means keeping the part that understands the language, continuing to train it on data close to our domain, then adding new layers of neurons specialized for the tasks we want to perform. The resulting model is much heavier, since the neural network alone weighs about 1 GB, but once trained it returns results in a few milliseconds.
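The lightweight model's input encoding can be sketched as a bag of words extended with adjacent word pairs. The tiny vocabulary below is an illustrative stand-in for a real dictionary:

```python
import numpy as np

# Illustrative vocabulary: single words plus one adjacent word pair (bigram).
VOCAB = ["tu", "jamais", "écoutes", "merci", ("tu", "jamais")]
INDEX = {token: i for i, token in enumerate(VOCAB)}

def encode(sentence: str) -> np.ndarray:
    """Build the network's input vector: one slot per vocabulary entry."""
    words = sentence.lower().split()
    x = np.zeros(len(VOCAB))
    # Mark each known word...
    for w in words:
        if w in INDEX:
            x[INDEX[w]] = 1.0
    # ...and each known pair of adjacent words.
    for pair in zip(words, words[1:]):
        if pair in INDEX:
            x[INDEX[pair]] = 1.0
    return x
```

The vector is large (one dimension per dictionary entry) but extremely cheap to compute, which is what makes on-device, real-time inference feasible.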

The advanced model “scores” sentences with an average deviation of only 2% from the scores assigned by the experts. The lightweight model shows an average deviation of 8%, which is quite acceptable for a lightweight embedded application.

These models are explainable: it is possible to know the degree of confidence the network has in its own predictions, and to identify the role each word plays in the prediction that is made.
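Both ideas can be illustrated on a simple model. Confidence can be read from the softmax over the output layer, and with a linear scoring layer each word's contribution is just its weight times its feature value. The weights below are hypothetical, not trained values:

```python
import math

def softmax(logits):
    """Turn raw output scores into probabilities; the max is the confidence."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-word weights of a linear scoring layer.
WEIGHTS = {"jamais": 0.8, "tu": 0.3, "merci": -0.5}

def explain(sentence: str) -> dict:
    """Map each known word to its contribution to the predicted score."""
    return {w: WEIGHTS[w] for w in sentence.lower().split() if w in WEIGHTS}

probs = softmax([2.0, 0.5, -1.0])
confidence = max(probs)          # how sure the model is of its top class
contributions = explain("Tu ne m'écoutes jamais")
```

For deep models such as BERT, attribution requires heavier techniques (e.g. gradient-based methods), but the principle reported here, a confidence per prediction and a contribution per word, is the same.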
