TY - CONF
T1 - Intentionally Biasing User Representation?
T2 - 12th Nordic Conference on Human-Computer Interaction: Participative Computing for Sustainable Futures, NordiCHI 2022
AU - Salminen, Joni
AU - Jung, Soon Gyo
AU - Jansen, Bernard
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/8
Y1 - 2022/10/8
N2 - Algorithmically generated personas can help organizations understand their social media audiences. However, when algorithms create personas from social media user data, the resulting personas may contain toxic quotes that negatively affect content creators' perceptions of the personas. To address this issue, we implemented toxicity detection in an algorithmic persona generation system capable of using tens of millions of social media interactions and user comments for persona creation. In the system's user interface, we provide a feature that lets content creators turn toxic quotes on or off, depending on their preferences. To investigate the feasibility of this feature, we conducted a study with 50 professionals in the online publishing domain. The results show varied reactions, including hate-filter critics, hate-filter advocates, and those in between. Although personal preferences play a role, the usefulness of toxicity filtering appears primarily driven by the work task, specifically the type and topic of the stories the content creator aims to produce. We identify six use cases in which a toxicity filter is beneficial. For system development, the results imply that it is better to give content creators the option to view or hide toxic comments than to make this decision on their behalf. We also discuss the ethical implications of removing toxic quotes from algorithmically generated personas, including the potential to bias the user representation.
AB - Algorithmically generated personas can help organizations understand their social media audiences. However, when algorithms create personas from social media user data, the resulting personas may contain toxic quotes that negatively affect content creators' perceptions of the personas. To address this issue, we implemented toxicity detection in an algorithmic persona generation system capable of using tens of millions of social media interactions and user comments for persona creation. In the system's user interface, we provide a feature that lets content creators turn toxic quotes on or off, depending on their preferences. To investigate the feasibility of this feature, we conducted a study with 50 professionals in the online publishing domain. The results show varied reactions, including hate-filter critics, hate-filter advocates, and those in between. Although personal preferences play a role, the usefulness of toxicity filtering appears primarily driven by the work task, specifically the type and topic of the stories the content creator aims to produce. We identify six use cases in which a toxicity filter is beneficial. For system development, the results imply that it is better to give content creators the option to view or hide toxic comments than to make this decision on their behalf. We also discuss the ethical implications of removing toxic quotes from algorithmically generated personas, including the potential to bias the user representation.
UR - http://www.scopus.com/inward/record.url?scp=85140925318&partnerID=8YFLogxK
U2 - 10.1145/3546155.3546647
DO - 10.1145/3546155.3546647
M3 - Conference contribution
AN - SCOPUS:85140925318
T3 - ACM International Conference Proceeding Series
BT - Participative Computing for Sustainable Futures - Proceedings of the 12th Nordic Conference on Human-Computer Interaction, NordiCHI 2022
PB - Association for Computing Machinery
Y2 - 8 October 2022 through 12 October 2022
ER -