Welcome to Lars Holmberg's licentiate seminar!

The seminar will be streamed live on this webpage.

Lars Holmberg, doctoral student at the Department of Computer Science and Media Technology, defends his thesis “Human In Command Machine Learning”.

Questions to the respondent can be sent to paul.davidsson@mau.se

Moderator

Professor Niklas Lavesson, Jönköping University

Examiner

Associate professor Romina Spalazzese, Malmö University

Chair at the seminar and principal supervisor

Professor Paul Davidsson, Malmö University

Supervisor

Dr. Per Linde, Malmö University


Abstract: Human In Command Machine Learning

Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus of this work is one of these areas: ML systems where decisions concerning ML model training, usage, and selection of target domain lie in the hands of domain experts.

This work thus concerns ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML) systems. To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and sorting batteries. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts potentially can independently train an ML model, interact with it in an iterative fashion, and interpret and understand its decisions.

HIC-ML should be seen as a governance principle that focuses on adding value and meaning for users. In this work, concrete application areas are presented and discussed. To open up for designing ML-based products in the area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented, by imposing structure and rigidity derived from scientific explanations. Together, this opens up for a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and can potentially, as a consequence, result in a democratisation of the usage of and knowledge concerning this powerful technology.