The Agreement of Checklist Recordings Between Faculties and Standardized Patients in an Objective Structured Clinical Examination (OSCE)
Hoonki Park1, Jungkwon Lee2, Hwansik Hwang1, Jaeung Lee3, Yunyoung Choi4, Hyuck Kim5, Dong Hyun Ahn6
1Department of Family Medicine, Hanyang University College of Medicine, Korea. 2Department of Internal Medicine, Hanyang University College of Medicine, Korea. 3Department of Nuclear Medicine, Hanyang University College of Medicine, Korea. 4Department of Thoracic Surgery, Hanyang University College of Medicine, Korea. 5Department of Neuropsychiatry, Hanyang University College of Medicine, Korea. 6Department of Family Medicine, Sungkyunkwan University School of Medicine, Korea.
Corresponding Author:
Jungkwon Lee, Tel: 02)3410-2441, Fax: 02)3410-0388, Email: jkwonl@smc.samsung.co.kr |
|
|
ABSTRACT
PURPOSE: A high degree of agreement between the checklist recordings of standardized patients (SPs) and those of faculty examiners is necessary if SPs are eventually to replace faculty in the OSCE evaluation process. This study was conducted to determine to what degree SPs' checklist recordings agree with those of faculty examiners during an OSCE. METHODS: One hundred twenty-one fourth-year medical students of Hanyang University College of Medicine took an OSCE. In each of two study stations, a student saw an SP for four minutes, and during the following fifty seconds the SP recorded the same checklist as the faculty examiner did. RESULTS: For the 'bad news delivery' station, SP evaluations were more lenient than those of the faculty examiners (56 vs 45, p < 0.01), but for the 'chest pain' station there was no significant difference.
Pearson correlation coefficients were 0.60 for the 'bad news delivery' station and 0.65 for the 'chest pain' station. The mean percentages of agreement for the 'bad news delivery' and 'chest pain' checklists were 71% and 82%, respectively, and the mean kappa statistics were 0.19 and 0.49, respectively. CONCLUSION: The ratings by SPs were consistent with those of faculty examiners only to a moderate degree. Precise scoring criteria and optimal SP training are prerequisites for replacing faculty examiners with SPs for OSCE checklist recording.
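For readers less familiar with the agreement statistics reported above, Cohen's kappa corrects raw percent agreement for the agreement expected by chance alone; the display below is the standard textbook definition, not a detail quoted from this study's methods.

% Cohen's kappa (standard definition; shown for reference, assumed rather than taken from the paper)
% p_o = observed proportion of agreement, p_e = proportion of agreement expected by chance
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]

Taking the reported means at face value as a rough back-calculation, the 'chest pain' figures (p_o of about 0.82 and kappa of about 0.49) would correspond to a chance-expected agreement p_e of roughly 0.65, which illustrates why a high raw percentage of agreement can still yield only a moderate kappa.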
Keywords:
Clinical competence; Observer variation; Educational measurement; Patient simulation