Korean J Med Educ > Volume 37(1); 2025 > Article
Kim: Relationship between intern performance assessed by peers and academic performance in medical school: a preliminary study

Abstract

Purpose

This study investigated the association between intern doctors’ performance as assessed by their peers and their academic performance in medical school.

Methods

A retrospective cohort analysis was conducted with 21 graduates from a South Korean medical school who interned at an affiliated center. Participants underwent biannual peer evaluations of intern performance, rated on a 5-point Likert scale covering professionalism, clinical competencies, and interpersonal skills. Associations between peer ratings and grade point average (GPA), exit assessment scores, and Korean Medical Licensing Examination (KMLE) scores were analyzed.

Results

Peer ratings showed moderate to strong positive associations with exit assessment and KMLE scores, but no relationship with cumulative GPAs. Peer ratings correlated more strongly with objective structured clinical examinations than with written tests.

Conclusion

Medical students’ outcomes in exit assessments and the KMLE, especially clinical performance tests, are strong predictors of their performance as intern doctors. These findings highlight the value of clinical performance assessments for predicting intern doctors’ performance and suggest the need for more comprehensive and authentic assessment methods to enhance their predictive validity.

Introduction

There is a growing interest in investigating factors that predict intern doctors’ performance to help select those who have the required competencies and are likely to succeed in the postgraduate medical education program. Academic factors such as medical licensing examination scores and medical school grade point averages (GPAs) are known to be associated with intern doctors’ performance [1-3]. A systematic review of studies on medical students’ summative assessments indicates that their objective structured clinical examination (OSCE) and written examination scores have significant relationships with their clinical performance in the trainee intern year [4]. Yet, the use of these assessments as predictive measures for intern performance is limited due to a small body of evidence and large variations in the predictive strength of the relationships identified [4]. Furthermore, research indicates that factors such as the type of assessment and the context in which it is conducted can influence this relationship [5,6].
Research has explored the relationship between junior doctors’ performance and their academic achievements in medical school using various measurement tools. Some studies of workplace performance assessment of junior doctors using the Junior Doctor Assessment Tool found significant correlations with academic performance in medical school (i.e., GPA, clinical attachment, and written exam scores) [5,7]. Other studies highlighted associations between the sociodemographic factors of UK foundation program doctors and their workplace performance, using educational performance measure decile as an outcome measure [8], and between medical student selection criteria and the workplace performance of junior doctors, as measured by supervising consultants’ reports [6].
Among the various measures of junior doctors’ workplace performance, the literature indicates that peer evaluation is a feasible tool for assessing physician competency [9-11]. Peer evaluation among medical students and healthcare professionals has a long history; it assesses physicians’ performance based on peers’ judgments of observations they have made in the clinical setting over time [9]. Although there are several different ways of conducting peer evaluation, a structured questionnaire is often used as a workplace-based assessment tool [12]. Peer evaluation is best used as part of a multi-source approach to performance assessment in the areas of clinical practice, humanistic qualities, and communication skills, as medical knowledge alone is not an adequate predictor of doctors’ performance [13]. However, peer evaluation can be subject to rater bias, which is not only affected by the rater’s standards but may also be influenced by personal relationships, stakes, and equivalence [9,11].
Although studies have been conducted on factors associated with intern doctors’ workplace performance, research is scant on its relationship with their academic achievements in medical school using measurement tools other than clinical knowledge or evaluations received by program directors. This study investigated the relationship between medical interns’ performance as assessed by their peers and their academic achievements in medical school as a preliminary study on factors predicting workplace performance in the intern year.

Methods

1. Study participants and setting

A retrospective study was conducted with a cohort of medical interns who had graduated from a private medical school and completed their internship training at an affiliated academic center in a suburb of Seoul, Korea. The study sample comprised 53 students who were admitted to the medical school in March 2019. Among these students, those who graduated in February 2023 and completed their internship training at the affiliated academic center in February 2024 were included in this study.
Participants graduated in the same year but had been admitted through two different admission tracks. Thirty participants (56%) were graduate-entry students and 23 (44%) were from the accelerated program, in which they had attended a 3-year undergraduate program prior to the basic medical education program. Regardless of track, all participants underwent the same basic medical education curriculum, composed of 2 years of pre-clinical education and 2 years of clinical clerkships. Those who had not graduated in the same year due to remediation or other reasons, or who had chosen careers other than the internship at the affiliated medical center, were excluded from the study.

2. Research instruments and procedures

Participants’ peer evaluation scores were obtained from the academic medical center under investigation. An existing assessment tool developed and implemented by the education and training department at the medical center was used for peer evaluation in this study. This peer evaluation is part of the assessment program for intern performance alongside monthly performance assessments by training staff. This assessment program is governed by the education and training department at the medical center and is conducted as a summative evaluation for the completion of the internship program.
This peer evaluation was conducted biannually during the 1-year internship program; in each round, every intern evaluated all of their peers using a three-item questionnaire. The items addressed competency areas expected of interns, namely professionalism, clinical competencies, and interpersonal skills, and were rated on a 5-point Likert scale (1=very poor, 5=very excellent). In this peer evaluation, interns rate their peers based on their observations of and experiences working with them in the workplace, which corresponds to the “Does” level in Miller’s framework for the evaluation of clinical competence [14].
Additionally, participants’ academic achievements at the medical school, including GPAs, exit assessment scores, and Korean Medical Licensing Examination (KMLE) scores, were obtained for analysis. The exit assessments were an accumulation of multiple assessments conducted throughout the fourth year of medical school to evaluate student achievement of exit outcomes; they comprised five written tests implemented as clinical knowledge mock exams and three clinical performance tests (i.e., OSCEs) on various cases of clinical encounters. Participants provided their own KMLE scores, with consent obtained at the time of graduation.

3. Data analysis

Participants’ peer ratings were analyzed by performing descriptive statistics. Cronbach’s α was calculated to establish the internal consistency of the items. Differences in peer ratings by participants’ demographics and type of admission were analyzed using independent t-tests. Differences in participants’ baseline performance were analyzed by comparing their cumulative GPAs, exit assessment scores, and KMLE scores across groups using independent t-tests. Pearson’s correlation coefficients were calculated to examine the associations between participants’ peer ratings and their academic achievements at the medical school.
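The two core computations above, Cronbach’s α for the three questionnaire items and Pearson’s r between peer ratings and academic scores, can be sketched as follows. This is a minimal illustration only: the rating and OSCE values are synthetic, and the variable names are assumptions, not the study’s actual data.

```python
from statistics import pvariance, mean

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists (one list per questionnaire item)."""
    k = len(items)  # number of items (3 in the peer questionnaire)
    totals = [sum(scores) for scores in zip(*items)]  # total score per rated intern
    item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-point peer ratings for five interns on the three items
prof  = [4, 3, 5, 4, 3]   # professionalism
clin  = [4, 3, 5, 3, 3]   # clinical competencies
inter = [5, 3, 4, 4, 2]   # interpersonal skills
alpha = cronbach_alpha([prof, clin, inter])

# Hypothetical OSCE scores for the same five interns
osce = [250, 230, 260, 245, 228]
total_peer = [a + b + c for a, b, c in zip(prof, clin, inter)]
r = pearson_r(total_peer, osce)
print(round(alpha, 2), round(r, 2))  # prints: 0.85 0.98
```

In practice the study would also report p-values for each coefficient (e.g., via `scipy.stats.pearsonr`); the pure-Python version above only shows how the point estimates are formed.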

4. Ethical considerations

Ethical approval was provided by the institutional review board (IRB) of Dongguk University Gyeongju (DGU IRB 2020003), and informed consent was obtained from the participants while they were in medical school. Participants were informed that they could withdraw from the study at any time without any negative consequences.

Results

1. Participants

Table 1 presents participant demographics and their academic achievement in medical school. Among the 53 medical students eligible for this study, 21 (female, 5 [24%]; male, 16 [76%]) were included in this cohort. Twelve (57%) had graduated from the accelerated program and 9 (43%) were from the graduate-entry program. Participants’ ages ranged from 26 to 37 years, with a median age of 27 years. Participants did not differ in their baseline performance at the medical school in terms of GPAs, exit assessment outcomes and KMLE scores.

2. Participants’ peer evaluation scores across demographics

Participants’ overall workplace performance scores, as evaluated by peers, averaged 3.77 (standard deviation=0.59). Their performance scores across the three competency areas ranged from 3.77 to 3.79. Table 2 compares participants’ peer evaluation scores across different demographics. Participants’ peer ratings did not differ across gender, age, or entry-level groups. Cronbach’s α was 0.92, demonstrating a high level of internal consistency among the items.

3. Relationship between participants’ performance as interns and their academic achievements in medical school

Table 3 presents the relationship between participants’ performance as interns and their academic achievements in medical school. Participants’ peer ratings had a moderate to strong association with their scores in the exit assessments and the KMLE, which showed a stronger association than their medical school GPAs. In particular, participants’ peer ratings were more strongly associated with their achievements in OSCEs than with those in the written tests.

Discussion

This study investigated intern doctors’ performance as assessed by their peers and its association with their academic achievements in medical school, as a preliminary study of factors predicting performance in the intern year. Participants’ peer ratings did not differ across demographics, which indicates that medical students’ demographics do not predict their performance as interns. Participants’ peer ratings showed moderate to strong positive associations with academic achievements at the medical school. Among these achievements, exit assessment and KMLE scores showed stronger associations with interns’ peer ratings than did medical school GPAs. In particular, peer ratings were more strongly associated with outcomes in OSCEs than with those in written tests. This finding indicates that medical students’ achievement in clinical performance tests is a stronger predictor of their performance as intern doctors. These findings align with previous studies indicating that medical students’ clinical performance in medical school is a more valid predictor of their workplace performance as doctors than written tests [4].
This study indicates that clinical performance assessments in medical school are a valid method for predicting medical students’ performance as doctors. However, it can be argued that the current format of the OSCE is limited in predicting medical students’ performance as interns because it does not fully mimic real-life situations [15]. In the OSCE, students are expected to perform in a clinical setting independently, whereas junior doctors often work as part of a clinical team under the supervision of senior medical staff. Therefore, it is suggested that more authentic scenarios be provided in the OSCE to make it an even more valid and reliable measure for predicting medical students’ workplace performance as interns. Moreover, more comprehensive and authentic assessment methods for medical students are warranted to enhance their predictive validity. To this end, a wider variety of workplace-based assessments should be implemented in medical schools to assess medical students’ clinical performance at the ‘Does’ level in the hierarchy of clinical competence assessment [14].
The limitations of this study should be acknowledged, as they also have implications for future studies. First, this was a preliminary study with a small sample, which limits the generalizability of the findings. A regression analysis study with a larger sample is needed to validate factors for predicting intern performance. Second, this study measured intern performance from the perspective of their peers, which can be subject to several biases when used alone as an assessment tool [9,11]. Thus, a more comprehensive study of factors that predict intern performance using multiple sources of feedback on intern performance is warranted to better understand the relationship between them.

Notes

Acknowledgements
None.
Funding
No financial support was received for this study.
Conflicts of interest
No potential conflict of interest relevant to this article was reported.
Author contributions
All work was performed by Kyong-Jee Kim.

Table 1.
Participants’ Academic Achievements across Different Groups

| Variable | GPA Year 1 | GPA Year 2 | GPA Year 3 | GPA Year 4 | Exit written tests | Exit OSCEs | KMLE written test | KMLE OSCEs |
|---|---|---|---|---|---|---|---|---|
| Gender, t (p) | 1.03 (0.35) | 0.98 (0.37) | 1.26 (0.27) | 1.10 (0.32) | 0.20 (0.18) | 1.84 (0.11) | 0.12 (0.68) | 1.23 (0.24) |
| Female (n=5), mean±SD | 3.24±0.80 | 3.36±0.50 | 3.94±0.41 | 3.97±0.29 | 388.00±52.72 | 255.40±10.53 | 159.20±12.28 | 826.06±23.35 |
| Male (n=16), mean±SD | 3.17±0.53 | 3.31±0.48 | 3.50±0.35 | 3.79±0.29 | 383.00±35.74 | 245.13±12.13 | 159.88±8.32 | 805.45±52.12 |
| Entry-level, t (p) | 0.57 (0.58) | 0.91 (0.07) | 0.80 (0.43) | 0.76 (0.46) | 1.89 (0.07) | 0.34 (0.74) | 2.03 (0.06) | 0.01 (0.10) |
| Accelerated program (n=12), mean±SD | 3.31±0.55 | 3.40±0.50 | 3.64±5.1 | 3.87±0.34 | 397.17±38.05 | 248.33±14.90 | 163.00±7.93 | 810.44±53.31 |
| Graduate-entry (n=9), mean±SD | 3.03±0.63 | 3.20±0.43 | 3.56±0.21 | 3.77±0.18 | 366.89±34.87 | 246.56±8.60 | 155.33±9.00 | 810.25±40.73 |
| Age (yr), t (p) | 1.37 (0.12) | 1.94 (0.19) | 1.62 (0.70) | 0.65 (0.13) | 1.71 (0.11) | 1.36 (0.19) | 2.36 (0.30) | 0.73 (0.48) |
| ≤27 (n=11), mean±SD | 3.92±0.51 | 3.97±0.36 | 3.92±0.33 | 3.94±0.40 | 397.27±39.91 | 251.00±12.26 | 163.73±7.89 | 817.55±49.59 |
| >27 (n=10), mean±SD | 3.60±0.57 | 3.59±0.51 | 3.60±0.54 | 3.60±0.54 | 369.80±34.14 | 243.80±11.91 | 155.30±8.49 | 802.45±45.65 |

GPA: Grade point average, KMLE: Korean Medical Licensing Examination, OSCE: Objective structured clinical examination, SD: Standard deviation.

Table 2.
Differences in Intern Peer Evaluation Scores across Demographic Characteristics

| Variable | Professionalism | Clinical competencies | Interpersonal skills | Total |
|---|---|---|---|---|
| Gender, t (p) | 1.03 (0.35) | 0.98 (0.37) | 1.26 (0.27) | 1.10 (0.32) |
| Female (n=5), mean±SD | 3.52±0.67 | 3.57±0.61 | 3.48±0.65 | 3.52±0.64 |
| Male (n=16), mean±SD | 3.85±0.51 | 3.86±0.42 | 3.86±0.37 | 3.86±0.43 |
| Entry-level, t (p) | 0.57 (0.58) | 0.91 (0.07) | 0.80 (0.43) | 0.76 (0.46) |
| Accelerated program (n=12), mean±SD | 3.83±0.59 | 3.68±0.38 | 3.84±0.52 | 3.85±0.49 |
| Graduate-entry (n=9), mean±SD | 3.69±0.52 | 3.87±0.49 | 3.67±0.42 | 3.68±0.49 |
| Age (yr), t (p) | 1.37 (0.12) | 1.94 (0.19) | 1.62 (0.70) | 0.65 (0.13) |
| ≤27 (n=11), mean±SD | 3.92±0.51 | 3.97±0.36 | 3.92±0.33 | 3.94±0.40 |
| >27 (n=10), mean±SD | 3.60±0.57 | 3.59±0.51 | 3.60±0.54 | 3.60±0.54 |

SD: Standard deviation.

Table 3.
Association between Intern Performance Assessed by Peers and Their Academic Achievements in Medical School

Peer evaluation scores (Pearson’s r):

| Academic performance in medical school | Professionalism | Clinical competencies | Interpersonal skills | Total |
|---|---|---|---|---|
| Year 1 GPA | 0.379 | 0.369 | 0.314 | 0.361 |
| Year 2 GPA | 0.259 | 0.284 | 0.195 | 0.250 |
| Year 3 GPA | 0.335 | 0.306 | 0.190 | 0.284 |
| Year 4 GPA | 0.522* | 0.553** | 0.396 | 0.498* |
| Cumulative GPA | 0.394 | 0.396 | 0.295 | 0.368 |
| Exit assessments: Written tests | 0.500* | 0.514* | 0.466* | 0.501* |
| Exit assessments: OSCEs | 0.634** | 0.677** | 0.531* | 0.623** |
| KMLE: Written test | 0.500* | 0.514* | 0.466* | 0.501* |
| KMLE: OSCEs | 0.623** | 0.634** | 0.677** | 0.531* |

GPA: Grade point average, OSCE: Objective structured clinical examination, KMLE: Korean Medical Licensing Examination.

* p<0.05.

** p<0.01.

References

1. Filiberto AC, Cooper LA, Loftus TJ, Samant SS, Sarosi GA Jr, Tan SA. Objective predictors of intern performance. BMC Med Educ. 2021;21(1):77.
2. Taylor ML, Blue AV, Mainous AG 3rd, Geesey ME, Basco WT Jr. The relationship between the National Board of Medical Examiners’ prototype of the Step 2 clinical skills exam and interns’ performance. Acad Med. 2005;80(5):496-501.
3. Yun JY, Ryu H, Kim JW, et al. Subtyping of performance trajectory during medical school, medical internship, and the first year of residency in training physicians: a longitudinal cohort study. J Korean Med Sci. 2024;39(33):e239.
4. Wilkinson TJ, Frampton CM. Comprehensive undergraduate medical assessments improve prediction of clinical performance. Med Educ. 2004;38(10):1111-1116.
5. Carr SE, Celenza A, Puddey IB, Lake F. Relationships between academic performance of medical students and their workplace performance as junior doctors. BMC Med Educ. 2014;14:157.
6. Sladek RM, Burdeniuk C, Jones A, Forsyth K, Bond MJ. Medical student selection criteria and junior doctor workplace performance. BMC Med Educ. 2019;19(1):384.
7. Carr SE, Celenza A, Mercer AM, Lake F, Puddey IB. Predicting performance of junior doctors: association of workplace based assessment with demographic characteristics, emotional intelligence, selection scores, and undergraduate academic performance. Med Teach. 2018;40(11):1175-1182.
8. Kumwenda B, Cleland JA, Walker K, Lee AJ, Greatrix R. The relationship between school type and academic performance at medical school: a national, multi-cohort study. BMJ Open. 2017;7(8):e016291.
9. Norcini JJ. Peer assessment of competence. Med Educ. 2003;37(6):539-543.
10. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269(13):1655-1660.
11. Ramsey PG, Carline JD, Blank LL, Wenrich MD. Feasibility of hospital-based use of peer ratings to evaluate the performances of practicing physicians. Acad Med. 1996;71(4):364-370.
12. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Med Teach. 2007;29(9):855-871.
13. Arnold L, Stern DT. Content and context of peer assessment. In: Stern DT, ed. Measuring Medical Professionalism. New York, USA: Oxford University Press; 2006:175-194.
14. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63-S67.
15. Wallace J, Rao R, Haslam R. Simulated patients and objective structured clinical examinations: review of their use in medical education. Adv Psychiatr Treat. 2002;8(5):342-348.