What challenges and opportunities can arise when humans use an artificial neural network to support decisions? Lars Holmberg posed this question in his dissertation “Neural Networks in Context: Challenges and Opportunities”.

Artificial neural networks are inspired by the structure of the human brain, but they differ in that they cannot explain why they propose a certain decision. On their own, they lack the ability to combine knowledge and experience from multiple fields, and they cannot formulate a causal explanation for a particular decision they have proposed.

In his dissertation at the Department of Computer Science and Media Technology, Lars Holmberg conducted design experiments on neural networks in two different situations. In the first situation, the neural network performs on par with human abilities.


“In this case, the machine helps us with something we can already do, but it can, for example, do it faster. Let’s say the task is to sort batteries for recycling; it will do it almost as well as a human. It is then useful, since a human can assess the system’s truthfulness,” he says.

In the second situation, the neural network outperforms human cognitive abilities.

“Here, it’s no longer enough for the system to be useful – it also needs to be truthful, as the machine is now the expert.”

He explains this with a mushroom example.

“Let’s say you use a neural network to recognise and distinguish different mushroom species. If the system identifies a mushroom as a champignon, how do you know if you can trust it? Maybe there’s a poisonous mushroom very similar to a champignon? Humans can put the mushroom into a larger context – where it grows, what it smells like, when it was picked and by whom – whereas the machine lacks this ability.”

Humans reason both inductively (from experience) and deductively (from logical principles), whereas neural networks lack the deductive component.

To increase truthfulness, Lars Holmberg proposes a model for how neural networks can develop a capacity to link independent concepts together and thus come closer to deductive reasoning. In machine learning, this can mean recognising shapes, colours, numbers and orientations as distinct pieces of information.

“So, if you show an image of a coastline to a machine learning system, it would recognise it not only by comparing it to other similar images, but also because it contains certain known elements the machine has learned. These are, for example, sea, beach and horizon, which can in turn be broken down into even smaller elements. This thinking builds on previous research, such as the work of Been Kim, a researcher at Google Brain.”
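To make the idea concrete, here is a minimal, hypothetical sketch in Python of this kind of concept-based recognition. The concept names, scores and subset rule are invented for illustration; this is not Holmberg's actual model, nor Been Kim's concept-activation method, which operates on a network's internal activations rather than on hand-written rules.

```python
# A minimal, hypothetical sketch of concept-based recognition.
# All concept names, scores and rules here are illustrative inventions.

# Each class is supported when all of its constituent concepts are detected.
CONCEPT_RULES = {
    "coastline": {"sea", "beach", "horizon"},
    "forest": {"trees", "undergrowth"},
}

def detect_concepts(concept_scores, threshold=0.5):
    """Return the concepts whose detector score exceeds the threshold."""
    return {name for name, score in concept_scores.items() if score > threshold}

def classify(concept_scores):
    """Pick a class whose required concepts are all present, with its evidence."""
    present = detect_concepts(concept_scores)
    for label, required in CONCEPT_RULES.items():
        if required <= present:  # subset test: every required concept was seen
            return label, sorted(required)
    return None, sorted(present)

# Example: scores a concept detector might emit for a seaside photograph.
scores = {"sea": 0.92, "beach": 0.81, "horizon": 0.77, "trees": 0.10}
label, evidence = classify(scores)
print(label, evidence)  # coastline ['beach', 'horizon', 'sea']
```

The point of the decomposition is that the system's answer comes with something a human can check: the list of concepts that supported it, rather than only a similarity to previously seen images.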

Even if AI systems become more reliable in the future, Lars Holmberg believes we should still be aware of how information is created and where it comes from.

“I think it’s important that we keep machine explanations and human explanations separate, and that we know what the source of an explanation is,” he says.

Text: Magnus Erlandsson


More about the research and the researcher

Read the entire dissertation: Neural Networks in Context: Challenges and Opportunities

Lars Holmberg is affiliated with the Internet of Things and People (IOTAP) research centre at Malmö University.