Hannah Devinney

What does it mean to be “represented” in language data? How do we measure representation, and when does data tip into the kinds of “bias” that cause direct and indirect material harms to tech users and data subjects? How does ChatGPT work, anyway?

This seminar will present a “humanities-friendly” introduction to (data) representation in language technologies and its impacts, exploring how feminist and queer perspectives can be used to engage with questions of justice and fairness in technology. Together we will contextualize real examples of representational harms in tools such as chatbots, text-generation systems, and image-captioning models.

Hannah Devinney is a postdoc in Gender Studies at Linköping University.
