Designing better music listening experiences through transparent AI
Every day, algorithms on digital platforms recommend content to us. Even though many of us are aware of this, how these algorithms work and how they shape the algorithmic experience often remain opaque to users.
Malmö University researcher Maliheh Ghajargar believes that a better user experience (UX) design for recommender systems can make algorithmic recommendations more transparent and graspable to the public.
“Nowadays, the problem is that AI and machine learning algorithms are so pervasive and blended into everyday life that we often forget about them. That is by design, of course, but it can also be a design problem,” Ghajargar says.
In the article “Unboxing the Algorithm: Designing an Understandable Algorithmic Experience in Music Recommender Systems”, Ghajargar and master’s student Anna Marie Schröder focused on understandability as a design principle in music recommender systems and discussed possible future developments of the work.
When we use apps on our phones, scroll through social media, or stream videos and music, algorithms are at work deciding what content the service will suggest to us next: the recommendations.
However, Ghajargar believes that understanding how our interactions with AI and algorithms shape what gets recommended could lead not only to a more positive and transparent outlook on technology but also to more empowered users.
“Although users of such systems appreciate a certain level of unpredictability, to explore different and unexpected music, transparency remains a value and is especially relevant in higher-stakes scenarios such as AI-assisted recruitment systems,” Ghajargar adds.
“AI is becoming so pervasive. Recommender systems, for example, are always working in the background; we use them every day in music and video streaming apps, in e-commerce, in social media, and so on. They in turn use our data to shape our experiences online. But these systems are also unsupervised algorithms: they are often shaped exclusively by user behaviour and rarely have experts looking over them,” says Ghajargar.
What she is looking to do with her research is to make people aware of how the data that users give the algorithms affects what they in turn are recommended.
“In this research, we found that users rarely know that, for example, liking or skipping a song actually matters. Those actions are important in shaping the algorithm and the recommendations it offers,” she says.
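To make that idea concrete, here is a minimal, hypothetical sketch of how implicit feedback such as likes and skips might be folded into a track’s recommendation score. The event names, weights, and update rule are illustrative assumptions for this sketch, not the mechanics of any real streaming service or of the system studied in the article.

```python
# Hypothetical illustration: how implicit feedback (likes, skips)
# might nudge a recommender's preference score for a track.
# The weights below are invented for this sketch.

# Assumed signals: a like is a strong positive signal, finishing a
# track a weak positive one, and skipping a track a negative one.
FEEDBACK_WEIGHTS = {
    "like": 1.0,
    "completed": 0.3,
    "skip": -0.7,
}

def update_track_score(score: float, event: str, learning_rate: float = 0.1) -> float:
    """Move a track's preference score toward the feedback signal."""
    signal = FEEDBACK_WEIGHTS.get(event, 0.0)  # unknown events carry no weight
    return score + learning_rate * signal

# Example: a listener likes a track once, then skips it twice.
score = 0.0
for event in ["like", "skip", "skip"]:
    score = update_track_score(score, event)

print(f"updated preference score: {score:.2f}")  # -0.04
```

Even in this toy version, the point Ghajargar makes is visible: two casual skips outweigh one like, so small everyday actions steer what gets recommended next.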
To make the end user more aware of what the algorithms are doing in the background, Ghajargar works with three key characteristics: explainability (how does the AI work?), graspability (understanding how it works should be easy and intuitive), and transparency (the way the AI operates on data should be made clearer).
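As a small illustration of what graspability and transparency could look like in an interface, the sketch below pairs each suggested track with a plain-language reason. The class, function, and example names are hypothetical, invented for this sketch rather than drawn from the article.

```python
# Hypothetical illustration: attaching a human-readable explanation
# to each recommendation so the user can see why it appears.

from dataclasses import dataclass

@dataclass
class Recommendation:
    track: str
    reason: str  # the user behaviour that most influenced this suggestion

def explain(track: str, liked_artist: str) -> Recommendation:
    """Pair a suggested track with the signal that drove it."""
    return Recommendation(
        track=track,
        reason=f"Suggested because you recently liked songs by {liked_artist}.",
    )

rec = explain("Song Title", "Artist Name")
print(f"{rec.track}: {rec.reason}")
```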
Understanding human-AI relationships has implications beyond shaping good experiences around content, especially since AI on social media platforms is problematic in terms of bias and misinformation.
“Since our behaviours shape AI and AI can shape our behaviour, we probably need to create a reflective symbiotic relationship based on awareness. That might help us learn how AI and we as humans think,” says Ghajargar.
Text: Max Pahmp