

Medical educators frequently use simulations with standardised patients to expose learners to high-stakes scenarios in a safe, monitored context. Before piloting with target learners, however, it can be challenging to ensure a standardised experience and to give faculty consistent opportunities to measure competencies. We therefore designed a mixed-methods evaluation instrument based on our experience conducting usability tests of health information technologies. We gathered quantitative data on task completion rates, competency assessment rates, and user perceptions of the task, along with qualitative information on usability issues. Half of the testers did not complete the telehealth safety checks, and one tester did not complete an audio/visual cross-check. These issues interfered with faculty assessment of three competencies: clinical data collection, proper equipment use, and meeting professional standards. We used testers’ qualitative feedback to identify straightforward improvements that we plan to evaluate with another round of testers. We believe the method illustrated here is an easily reproducible approach that clinician educators can adapt for a variety of medical education simulations.