Research・研究内容

Below are my research projects at Tokyo Tech. For my previous research, please see my personal website.

Voice UX

Voice is a natural mode of communication. As an aural medium, voice is used to deliver information (speech) as well as express characteristics of the speaker (vocalics). Advances in sound production and machine learning alongside the proliferation of intelligent assistants and dictation interfaces have pushed voice user experiences (voice UX) to the forefront of modern human-computer interactions. We explore the paradigm of voice UX through systematic reviews, critical design studies, and controlled experiments.

音声は自然なコミュニケーション手段です。聴覚メディアである音声は、情報(内容)を伝えるだけでなく、話し手の特徴(パラ言語)を表現するためにも使用されます。音声生成と機械学習の進歩は、知的アシスタントやディクテーション・インターフェースの普及と相まって、音声ユーザー体験(音声UX)を現代のヒューマンコンピュータインタラクションの最前線に押し上げました。私たちは、系統的レビュー、クリティカル・デザイン調査、および統制された実験を通して、音声UXというパラダイムを探求しています。

Projects

Voice Against Bias

Can voice influence attitudes towards age? We aim to find out with MAKOTO, our “older adult” voice assistant.

Evaluating Voice UX

We’re exploring ways to measure and evaluate interactions with voice-based agents, interfaces, and environments.

Morphologies in Voice and Body

What kinds of bodies should voice-based agents have, if any? We explore a range of modalities and morphologies.

AI and Intersectional Design

Intersectional design examines how power operates through design practice with respect to overlapping factors of social identity, such as gender, age, and race. Designers, knowingly or not, draw on social models of how we look and sound, think and behave, and interact with the world. Users, too, respond in kind. We approach this phenomenon from an intersectional perspective, focusing on whether human diversity is reflected in design and research practice as well as how user reactions are shaped by biases embedded in the design of intelligent agents and interactive experiences.

交差的デザインは、ジェンダー、年齢、人種といった社会的アイデンティティの重複する要素に関して、デザイン実践を通してどのように権力が作用するかを検証します。デザイナーは、知ってか知らずか、私たちがどのように見え、聞こえ、考え、行動し、世界と相互作用するかという社会的モデルを引き出します。ユーザーもまた、それに呼応するのです。私たちは、人間の多様性がデザインやリサーチの実践に反映されているか、また、ユーザーの反応が知的エージェントやインタラクティブ体験のデザインに組み込まれたバイアスによってどのように形成されているかに注目し、交差的な観点からこの現象にアプローチしています。

Projects

Gender Neutrality in Robots

We’re exploring whether and how robots can be perceived as gender-neutral.

  • Members:
    Katie Seaborn
    Julia Keckeis
    Takao Fujii
  • Publications:
  • Funding: Engineering Academy Young Scientist Encouragement Award
  • Timeline: FY22~

Social Identity in Robots

We’re exploring how social identity affects human-robot interactions.

  • Members:
    Katie Seaborn
    Haruki Kotani
    Takao Fujii
  • Publications:
  • Funding: Engineering Academy Young Scientist Encouragement Award
  • Timeline: FY22~

Biases and Intersectionality

We’re approaching biases within and around us from an intersectional lens.

  • Members:
    Katie Seaborn
    Yeongdae Kim
  • Publications:
  • Funding: Engineering Academy Young Scientist Encouragement Award
  • Timeline: FY22~

Interactions in the Negaverse

Are we living in a negaverse? Critical scholarship has drawn attention to a range of ways in which technology exploits, affords, or even celebrates negative experiences. Dark patterns and persuasive interfaces, misinformation and fake news, maldaimonic UX and dark participation … even gamification can have adverse effects, whether intentionally or not. We explore how negative user experiences and orientations play out across a range of interactive systems, as well as how they can be disrupted.

私たちはネガバースに生きているのでしょうか?批評的な研究は、テクノロジーがネガティブな経験を利用したり、助長したり、あるいは称賛したりするさまざまな方法に注意を向けてきました。ダークパターンや説得的なインターフェイス、誤情報やフェイクニュース、maldaimonicなUXやダークな参加……ゲーミフィケーションでさえ、意図的かどうかにかかわらず、悪影響を及ぼすことがあります。私たちは、さまざまなインタラクティブ・システムにおいて、ネガティブなユーザー体験や志向がどのように展開されるのか、また、それらをどのように断ち切ることができるのかを探求しています。

Projects

ELEMI: Exoskeleton for the Mind

Exploring whether and how a metacognitive agent can help us grapple with misinformation on social media.

Trust in AI

What factors affect trust and reliance in AI-based agents, systems, and environments? Exploring layperson and expert perspectives.

Deceptive Design and Culture

Exploring dark patterns, deceptive interactions, and persuasive interfaces in Japan and elsewhere.

  • Members:
    Katie Seaborn
    Shun Hidaka
    Sota Kobuki
    Mizuki Watanabe
  • Publications:
  • Timeline: FY21~
Hidaka, S., Kobuki, S., Watanabe, M., & Seaborn, K. (2023). Linguistic dead-ends and alphabet soup: Finding dark patterns in Japanese apps. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3544548.3580942
Seaborn, K., Nam, S., Keckeis, J., & Itagaki, T. (2023). Can voice assistants sound cute? Towards a model of kawaii vocalics. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3544549.3585656
Seaborn, K. (2023). Interacting with masculinities: A scoping review. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3544549.3585770
Seaborn, K., & Kim, Y. (2023). “I’m” lost in translation: Pronoun missteps in crowdsourced data sets. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3544549.3585667
Seaborn, K., Chandra, S., & Fabre, T. (2023). Transcending the “male code”: Implicit masculine biases in NLP contexts. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3544548.3581017
Ku, B., Itagaki, T., & Seaborn, K. (2023). Dis/immersion in mindfulness meditation with a wandering voice assistant. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3544549.3585627
Ueno, T., Kim, Y., Oura, H., & Seaborn, K. (2023). Trust and reliance in consensus-based explanations from an anti-misinformation agent. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3544549.3585713
Seaborn, K., Barbareschi, G., & Chandra, S. (2023). Not only WEIRD but “uncanny”? A systematic review of diversity in human-robot interaction research. International Journal of Social Robotics. https://doi.org/10.1007/s12369-023-00968-4
Seaborn, K., Sekiguchi, T., Tokunaga, S., Miyake, N. P., & Otake-Matsuura, M. (2023). Voice over body? Older adults’ reactions to robot and voice assistant facilitators of group conversation. International Journal of Social Robotics, 15(2), 143–163. https://doi.org/10.1007/s12369-022-00925-7
Kim, Y., Ueno, T., Seaborn, K., Oura, H., Urakami, J., & Sawa, Y. (2023). Exoskeleton for the mind: Exploring strategies against misinformation with a metacognitive agent. Proceedings of the 2023 ACM International Conference on Augmented Humans (AHs). AHs ’23, Glasgow, Scotland, UK. https://doi.org/10.1145/3582700.3582725
Seaborn, K., Miyake, N. P., Pennefather, P., & Otake-Matsuura, M. (2022). Voice in human–agent interaction: A survey. ACM Computing Surveys, 54(4), 1–43. https://doi.org/10.1145/3386867
Sawa, Y., & Seaborn, K. (2022). Localizing the Ambivalent Ageism Scale for Japan. The 8th Asian Conference on Aging & Gerontology 2022: Official Conference Proceedings, 33–36. https://doi.org/10.22492/issn.2432-4183.2022.4
Seaborn, K., & Pennefather, P. (2022). Neither “hear” nor “their”: Interrogating gender neutrality in robots. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, 1030–1034. https://doi.org/10.5555/3523760.3523929
Seaborn, K., & Pennefather, P. (2022). Gender neutrality in robots: An open living review framework. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, 634–638. https://doi.org/10.5555/3523760.3523845
Seaborn, K., & Frank, A. (2022). What pronouns for Pepper? A critical review of gender/ing in research. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3491102.3501996
Seaborn, K., Pennefather, P., & Kotani, H. (2022). Exploring gender-expansive categorization options for robots. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3491101.3519646
Urakami, J., & Seaborn, K. (2022). Nonverbal cues in human-robot interaction: A communication studies perspective. ACM Transactions on Human-Robot Interaction (THRI).
Seaborn, K. (2022). From identified to self-identifying: Social Identity Theory for socially embodied artificial agents. Proceedings of the HRI 2022 Workshop on Robo-Identity 2. HRI Workshop on Robo-Identity ’22, Sapporo, Hokkaido, Japan. https://sites.google.com/view/robo-identity2
Ueno, T., Sawa, Y., Kim, Y., Urakami, J., Oura, H., & Seaborn, K. (2022). Trust in human-AI interaction: Scoping out models, measures, and methods. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3491101.3519772
Kobuki, S., Seaborn, K., Tokunaga, S., Fukumori, K., Hidaka, S., Tamura, K., Inoue, K., Kawahara, T., & Otake-Matsuura, M. (2022). Robots using “aizuchi” in online group conversation. Proceedings of the 40th Meeting of the Robotics Society of Japan (RSJ 2022). RSJ 2022, Tokyo, Japan. https://ac.rsj-web.org/2022/
Urakami, J., Kim, Y., Oura, H., & Seaborn, K. (2022). Finding strategies against misinformation in social media: A qualitative study. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3491101.3519661
Seaborn, K., Urakami, J., & Oura, H. (2021, August 8). Bots Against Bias (BoAB): A seminar on designing robots that enhance human metacognition [Workshop]. 2021 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2021), Vancouver, BC, Canada. https://aspirelab.io/boab2021/
Seaborn, K., Pennefather, P., Miyake, N., & Otake-Matsuura, M. (2021). Crossing the Tepper line: An emerging ontology for describing the dynamic sociality of embodied AI. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3411763.3451783
Seaborn, K., & Urakami, J. (2021). Measuring voice UX quantitatively: A rapid review. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3411763.3451712
Seaborn, K. (2021). Removing gamification: A research agenda. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3411763.3451695
Independent

Voice Against Bias

We will explore the use of “elder” voice assistants as a novel method of reducing negative cognitive biases like implicit ageism. Cognitive biases are natural functions of the human mind that are influenced by the external world in positive and negative ways. Through co-design methodologies, intergenerational user studies, and long-term “in the wild” evaluations, we will examine whether voice assistants with older adult voices can shift biases in prosocial directions.

暗黙のエイジズムのような否定的な認知バイアスを軽減する新しい方法として、「年長者」の音声アシスタントの使用を研究します。認知バイアスとは、人間の心が持つ自然な機能であり、外界の影響を受けてポジティブにもネガティブにも変化するものです。共同デザイン手法、世代間のユーザー調査、および長期的な「イン・ザ・ワイルド」評価を通じて、高齢者の声を持つ音声アシスタントがバイアスを向社会的な方向にシフトできるかどうかを検証します。

April 2021 to March 2024

Funding: Japan Society for the Promotion of Science (JSPS)

Collaboration

Project Elemi

Our goal is to create and study Elemi, an “exoskeleton for the mind.” Elemi will be an AI-based intelligent support system designed to augment metacognition in everyday situations. Built with and for the public, it aims to help people with a range of everyday challenges in the information age.

本プロジェクトの目標は、心の外骨格「エレミ」を開発し研究することです。エレミは、日常的な状況におけるメタ認知を増強するためのAIベースの知的支援システムです。市民と共に、市民のために構築され、情報化時代における日常の課題について人々を支援することをねらいとしています。

September 2020 to present

Prof. Jacqueline Urakami (Tokyo Tech)
Prof. Hiroki Oura (Tokyo Tech)
Dr. Yeongdae Kim (Project Researcher)

Funding: DLab Challenge Grant 2020
