We need some perspective when considering future technologies; we should be asking ourselves what we want to use technology for, not what we can use it for, says a Malmö University researcher.

In his dissertation, Lars Holmberg highlights the human role in the development of machine learning (ML); this is an area of artificial intelligence (AI) that teaches machines to perform certain tasks, for example, to suggest what a user buys, watches, or listens to.

“Giant companies like Google and Amazon collect data to find similarities between you and other users, which they can then use to make suggestions. I want to turn the perspective around, and let individuals decide,” says Holmberg.

He is interested in how to put the learning ability of an ML system in the hands of people who want help solving a particular task. In his study, he has conducted three design experiments to further this ambition:

  • Supportive aids: for example, an ML system that you can teach your travel habits; it could then suggest the correct bus.
  • Sorting and organising physical objects: the study focussed on a concept where a robot sorted batteries for recycling; the same idea could conceivably be used to identify and pick garden weeds.
  • Identifying individual objects that are very similar to each other: an experiment focussed on hand-painted plates, but the approach could also be used, for example, to distinguish between individual birds.

It is about creating meaning and value for the individual without quantifying these in advance. For the large technology companies, people are increasingly assigned an instrumental value that can be measured in economic terms, and their inherent value can then be overshadowed, Holmberg believes.

The problem with a few large corporations owning so much data about us is that people are lumped together. Holmberg describes it as creating a life bubble where you get suggestions on everything from what to eat to choosing a partner. It becomes normative and streamlines our lives, he says; minorities thus become less important and are further marginalised.

“Recently, there’s been much more focus on ethical issues and representation when ML technology is developed. We see an increased interest in marginalised groups and ethical issues around the use of technology.”

Because it is becoming technically possible to create ML systems even with smaller amounts of data, Holmberg believes that the total dominance of the large companies can be supplemented with a new type of ML system.

One way is the perspective he is launching with "Human in Command Machine Learning": people without specialist knowledge of ML can download ML systems and then adapt them by adding the specific knowledge needed for their particular area of use.
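To make the idea concrete, here is a minimal, illustrative sketch of a system a user could teach directly — the class name, the feature choice, and the nearest-centroid rule are all assumptions for illustration, not Holmberg's actual design. The user supplies a handful of labelled examples (here, travel habits), and the system classifies new situations by comparing them to what it has been taught:

```python
# Illustrative sketch only: a toy "human in command" classifier where the
# user, not a data-collecting platform, supplies all the training examples.
# The nearest-centroid rule and all names here are hypothetical.
import math


class UserTaughtClassifier:
    """Learns categories solely from examples the user provides."""

    def __init__(self):
        self.examples = {}  # label -> list of feature vectors

    def teach(self, label, features):
        """The user adds one labelled example of their own behaviour."""
        self.examples.setdefault(label, []).append(features)

    def predict(self, features):
        """Classify by distance to the centroid of each taught label."""
        best_label, best_dist = None, math.inf
        for label, vectors in self.examples.items():
            # Average the user's examples for this label into a centroid.
            centroid = [sum(dim) / len(vectors) for dim in zip(*vectors)]
            dist = math.dist(centroid, features)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label


# A commuter teaches the system their travel habits;
# features are (hour of day, day of week).
clf = UserTaughtClassifier()
clf.teach("bus 3 to work", (8, 1))
clf.teach("bus 3 to work", (8, 2))
clf.teach("bus 7 to gym", (18, 3))
print(clf.predict((8, 4)))  # closest to the morning commute examples
```

The point of the sketch is the division of roles: the generic learning mechanism could be downloaded ready-made, while all the domain knowledge — which buses matter, at what times — comes from the individual rather than from aggregated data about other people.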

Text by Magnus Jando & Adrian Grist
