The RUSH Checklist

Consensus-Derived

Developed using academic-standard Delphi study methods

Learn more →

Community-Vetted

Peer-reviewed and endorsed by experts from around the world

Our publication →

Open Science

Part of the open science movement for transparent and reproducible research

Get the checklist →

Motivation

Despite the rapid growth of human-robot interaction (HRI) research, reporting practices remain inconsistent and often incomplete. Recent reviews show that even papers published in leading venues frequently omit essential information. Missing details about participant demographics, recruitment procedures, compensation, statistical analyses, methodological choices, and evaluation metrics make it difficult to assess study quality, compare results across studies, or reproduce findings reliably. These gaps undermine cumulative knowledge building and slow the translation of research insights into real-world applications. Addressing these reporting shortcomings is therefore critical for strengthening scientific rigor, transparency, and impact in the HRI community.

What is the RUSH checklist?

The RUSH is a validated, consensus-based tool for reporting HRI user studies, informed by 34 HRI experts from around the world. The checklist addresses both essential and context-dependent elements across nine domains: general information, introduction and background, participants, study design, ethics, data collection, results, discussion, and code and data release. The 106 reporting items ensure a thorough and standardized reporting approach.

How was the checklist developed?

We employed the Delphi study method, a structured, multi-phase process for collecting anonymous feedback and building consensus iteratively, reducing the influence of dominant voices and enabling a balanced perspective across geographic and disciplinary lines. The Delphi process generated strong expert consensus on 106 reporting items across nine domains, forming the RUSH checklist.

Why use the checklist?

We aim for the RUSH checklist to act as a supporting framework, bringing rigor, transparency, and reproducibility to the reporting of HRI user studies. While developed and validated specifically for HRI user studies, the checklist can also support planning, execution/tracking, and reporting for any work that includes a user evaluation with human participants, in both controlled (e.g., lab settings) and uncontrolled (e.g., public settings) studies.

Acknowledgements

This work was funded in part by the World Research Hub (WRH) Program of the International Research Frontiers Initiative (IRFI) at the Institute of Science Tokyo (formerly Tokyo Institute of Technology), as well as the U.S. National Science Foundation (NSF) under projects 2529206 and 2238088.

A Global Initiative

The Checklist Team

An international group of HRI researchers passionate about standards, working together to improve transparency and standardization in the reporting of our work.

Shruti Chandra

Assistant Professor

University of Northern British Columbia, Canada

Katie Seaborn

Associate Professor

University of Cambridge, UK / Institute of Science Tokyo, Japan

Giulia Barbareschi

Professor

University of Duisburg-Essen, Germany

Wing-Yue Geoffrey Louie

Assistant Professor

Oakland University, US

Shelly Bagchi

Electrical Engineer

National Institute of Standards and Technology (NIST), US

Sara Cooper

Researcher

Artificial Intelligence Research Institute (IIIA-CSIC), Spain

Zhao Han

Assistant Professor

University of South Florida, US

Daniel Tozadore

Lecturer

University College London, UK

Publication

Our foundational work on the RUSH checklist was published in the proceedings of the premier HRI ’26 conference:

Chandra, S., Seaborn, K., Barbareschi, G., Louie, W-Y. G., Bagchi, S., Cooper, S., Han, Z., and Tozadore, D. (2026). The RUSH Checklist: A Standardized Framework for Reporting User Studies in Human-Robot Interaction. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI ’26).

https://doi.org/10.1145/3757279.3785572

As an open science tool, the checklist is free to access and use, forever. It is available on Google Sheets. Feel free to share the short link: http://bit.ly/rushchecklist

How should I use the RUSH checklist?

The checklist can be used in a few different ways.

Like PRISMA, the RUSH can be used as a literal checklist. You can download a copy and go through each item to ensure that all necessary items are covered in your report. You can write the page number(s) or describe where each item can be found in the “Where Reported” column. When submitting your report for publication, you can include the checklist as a supplementary file or generate a table/figure to place within the paper itself.

Future work: LaTeX/Overleaf template

We also advocate for incorporating the checklist into the structure of reporting templates. For example, headings can align with the items in the checklist. (Ideally, publication bodies will adopt the RUSH into formal templates in the near future …)

The RUSH checklist can also be used before a project begins to ensure that you and your team have considered all items before conducting the research. It can likewise be used to develop and fill in protocols preregistered before you carry out the research.

We also suspect that the RUSH checklist would be excellent as a teaching aid in tutorials, workshops, lectures, and courses on HRI user studies.

Future work: Integration into official templates

Updates

We’ll keep track of updates to the RUSH project here.

Contact us

If you have any comments, questions, or suggestions, feel free to contact Shruti Chandra via the Project RUSH email:

rush |at| aspirelab |dot| io