Abstract

Purpose
This study was designed to develop an evaluation tool for assessing professional behavior and clinical competencies from the graduates’ perspective.
Methods
This study employed a mixed-method, sequential exploratory design. Semi-structured interviews were conducted with three graduates from different cohorts. Qualitative analysis of the interviews identified seven emerging themes for professional behavior and clinical competencies development. These themes were then developed into a 55-item questionnaire, which was distributed to 84 medical graduates for exploratory factor analysis (EFA) from February to April 2019. The quantitative data were analyzed using IBM SPSS ver. 21.0 (IBM Corp., Armonk, USA) with principal axis factoring. After the EFA, we conducted confirmatory factor analysis (CFA) with another 120 graduates to validate the tool.
Results
Eighty-four graduates completed the questionnaire for the EFA. Upon completion of the EFA, 35 of the 55 questionnaire items were found to be valid and reliable. The most appropriate fit was a seven-factor solution, which explained 58.18% of the variance after 15 iterations, with a Cronbach’s α of 0.916. The personal satisfaction factor was noted to be weak and was therefore merged into the patient management factor because of their similar intention. After this modification, the final number of EFA factors was six. The CFA found that 34 of the 35 items were valid and reliable representations of the latent variables.
Introduction
Medical professionalism is a normative system that organizes and delivers health care. It calls upon its members to jointly declare (“profess”) what the public and individual patients expect, based on shared competency standards and ethical values. These standards and values ensure that all medical professionals will deliver quality care to their patients [1]. Professionalism is understood to exist or develop in varying degrees as a characteristic or attribute that is identifiable within individuals. In other words, professionalism can be conceptualized and assessed at different levels: individual, interpersonal, and institutional/societal [2].
The professional behavior and clinical competency of a doctor reflect a range of personal and interpersonal qualities, attributes, behaviors, commitments, and values [3-6]. Doctors are expected to be accountable, altruistic, committed to excellence, compassionate, respectful, responsive, sensitive to diversity, and ethically sound [3,6].
Many evaluation tools have been developed to assess a doctor’s professional behavior. For example, the Professionalism Mini-Evaluation Exercise assesses medical residents’ professional behavior from the clinical supervisor’s perspective [7,8]. Our search of the literature, however, could not find a tool developed to assess professional behavior and clinical competencies development from the graduates’ perspective, although graduates are among the stakeholders in medical education [9]. As self-assessment is an essential factor in enhancing and improving professional ability [10,11], graduates can provide valuable input to the faculty for improving a medical education program: they have undergone the curriculum and are well placed to assess its effectiveness for their medical practice. Therefore, there is a need to develop a tool to assess professional behavior and clinical competencies development from the graduates’ perspective.
We have published our work on graduates’ performance evaluation from the perspectives of preceptors and patients [11,12]. In this article, we elaborate on the graduates’ perspective on their professional behavior and clinical competencies development. This perspective is important in helping Universitas Islam Bandung (UNISBA) evaluate the achievement of its educational outcomes, as graduates can elaborate on the behaviors and competencies that their workplace-based experience required of them.
This study aimed to develop a tool for assessing professional behavior and clinical competencies development from the graduates’ perspective.
Methods
The development of the questionnaire involved three considerations: (1) the study participants; (2) the factor analysis, with a sample size of 20% of the target population [13] or a subject-to-factor ratio between 10:1 and 20:1 [14,15]; and (3) the unstandardized parameters and their standard errors [16].
1. Study design
Using a mixed-method sequential exploratory design, a qualitative study preceded the development of the tool (questionnaire) [17]. It was followed by exploratory factor analysis (EFA), the results of which guided questionnaire revision. The revised questionnaire was then distributed for confirmatory factor analysis (CFA). Fig. 1 summarizes the steps involved in the study.
2. Sampling strategy and study participants
A total of 207 graduates were involved in the study. The qualitative study used purposive sampling, while the EFA and CFA used random sampling. The qualitative study involved three graduates for semi-structured interviews. The EFA involved 84 graduates, while the CFA involved another 120.
3. Study procedures
The qualitative study involved three medical graduates from the 2006, 2007, and 2008 batches. After providing written consent, each was interviewed for 45 to 60 minutes. Examples of the questions asked of the graduates are shown in Fig. 2. The interview questions were formulated based on a literature review of the topic and the research objectives, and were reviewed repeatedly by the research team to ensure their clarity. The qualitative study preceded tool development because it was necessary first to explore the graduates’ perceptions of their professional behavior and clinical competencies development. All interviews were transcribed verbatim. Thematic analysis of the transcripts found seven emerging themes: (1) professional behavior, (2) humanity, (3) patient management, (4) clinical skills competence, (5) personal satisfaction, (6) cognitive competence, and (7) interpersonal skill. Descriptions of the themes are listed in Table 1 [18,19].
The themes that emerged from the interviews were then developed into a 55-item questionnaire following the seven steps recommended by Stenfors-Hayes et al. [16]: familiarization, condensation, comparison, grouping, articulating, labelling, and contrasting. The 55-item questionnaire consisted of eight items for professionalism, 11 for humanity, five for patient management, 12 for clinical skill competence, four for personal satisfaction, six for cognitive competence, and nine for interpersonal skill. Table 2 lists the items for each theme and the source references [1-3,5,20-23].
Results
1. Exploratory factor analysis
The 55-item questionnaire was distributed to 84 medical graduates, a participant-to-factor ratio of 12:1 [15]. It was distributed over 3 months, from February to April 2019. Prior to conducting the EFA, the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity were performed to assess sampling adequacy [24]; a KMO value above 0.5 to 0.6 indicates an adequate sample. After sampling adequacy was confirmed with a KMO value of 0.546 (>0.5) and Bartlett’s test of sphericity of p<0.001 (χ2=3,682.84), factor extraction was conducted to acquire and describe the data structure [13,14] using IBM SPSS ver. 21.0 (IBM Corp., Armonk, USA). Items loading on the same factor or scale were confirmed using principal axis factoring with varimax rotation and Kaiser normalization.
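The two adequacy checks described above follow standard formulas and can be sketched directly. The sketch below (Python with NumPy, not the SPSS procedure used in the study; the function names are ours) computes Bartlett’s χ2 and the overall KMO measure from a data matrix whose columns are questionnaire items and whose rows are respondents.

```python
import numpy as np

def bartlett_sphericity(X):
    """Bartlett's test of sphericity: chi-square statistic testing
    whether the item correlation matrix differs from the identity
    (i.e., items are intercorrelated enough for factor analysis)."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy: compares
    zero-order correlations with anti-image (partial) correlations;
    values above about 0.5-0.6 are conventionally deemed adequate."""
    R = np.corrcoef(X, rowvar=False)
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.diag(R_inv))
    P = -R_inv / np.outer(d, d)      # partial correlation matrix
    np.fill_diagonal(P, 0.0)         # sum only off-diagonal terms
    np.fill_diagonal(R, 0.0)
    r2, p2 = (R ** 2).sum(), (P ** 2).sum()
    return r2 / (r2 + p2)
```

Both statistics depend only on the item correlation matrix, which is why a borderline KMO (here 0.546) can occur even when Bartlett’s test is strongly significant.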
The varimax rotation with Kaiser normalization extracted seven factors with eigenvalues greater than one (Table 3, Fig. 3). These seven factors were retained initially and accounted for 58.18% of the total variance. In Table 4, the pattern matrix of the varimax rotation shows the grouping of items with loading values of >0.5. For example, questions 11, 12, 13, 15, and 16 were assigned to factor 4 on the basis of their loading values; other items were classified in the same manner.
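The eigenvalue-greater-than-one retention rule (Kaiser criterion) used above can be illustrated with a short sketch; this is an illustrative helper of ours, not the authors’ SPSS output.

```python
import numpy as np

def kaiser_retained_factors(X):
    """Kaiser criterion: retain as many factors as there are
    eigenvalues of the item correlation matrix greater than one,
    and report the share of total variance those factors explain."""
    R = np.corrcoef(X, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
    n_retained = int((eig > 1.0).sum())
    explained = float(eig[:n_retained].sum() / eig.sum())
    return n_retained, explained
```

Because the trace of a correlation matrix equals the number of items, the explained share is the retained eigenvalues divided by the item count, which is how the 58.18% figure for seven factors is obtained.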
Thirty-five items were retained as latent/potential variables (loading value of >0.5, except item Q3, which had a value of 0.498). Item Q3 was retained in the questionnaire because it served to assess professional capacity. Twenty items were removed because their loading values were less than 0.5.
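The item-retention step, grouping each item under its highest-loading factor and dropping items below the 0.5 cutoff, can be sketched as follows (a hypothetical helper; the study applied this rule manually to the SPSS pattern matrix, with item Q3 kept as a judged exception).

```python
import numpy as np

def group_items_by_loading(loadings, item_names, threshold=0.5):
    """Assign each item to the factor on which its loading is
    largest in absolute value; items whose peak loading does not
    exceed the threshold (0.5 here) are dropped."""
    kept, dropped = {}, []
    for name, row in zip(item_names, loadings):
        f = int(np.argmax(np.abs(row)))
        if abs(row[f]) > threshold:
            kept.setdefault(f, []).append(name)
        else:
            dropped.append(name)
    return kept, dropped
```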
Internal consistency was calculated using Cronbach’s α coefficient [9,10] and interpreted according to the five categories recommended by Guilford and Fruchter [25] in 1978. The 35-item questionnaire had very high internal consistency, as evidenced by a Cronbach’s α of 0.916 (Table 5) [14,26,27]. However, upon completion of the EFA, one of the seven factors was noted to be weak and was merged into another factor. Therefore, only 35 items within six factors were retained in the questionnaire, as demonstrated in Table 6. The list of questions for the revised questionnaire used in the CFA is shown in Table 7 [28-31].
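Cronbach’s α itself has a closed form, the ratio of item variances to total-score variance scaled by the item count, which the following sketch computes (our own minimal implementation, not the SPSS routine).

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha: internal-consistency reliability of a scale
    whose columns are item scores and whose rows are respondents."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)
```

An α of 0.916, as obtained for the 35-item questionnaire, means the total-score variance is dominated by shared (inter-item) covariance rather than item-specific noise.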
2. Confirmatory factor analysis
CFA is a part of structural equation modelling. Upon completion of the EFA, CFA was conducted on another 120 graduates to validate the construct of the evaluation tool; the sample size was based on a participant-to-factor ratio of 20:1. The CFA was conducted using LISREL ver. 8.8 for Windows (Scientific Software International Inc., Lincolnwood, USA), and the results are presented in Fig. 3. The t-value of each item’s loading within its group of factors was computed, and an item was deemed “valid” if its t-value exceeded 1.96 (at the 95% confidence level). The item with the highest t-value was deemed the most reliable indicator of its latent variable.
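The validity rule applied to the LISREL output reduces to a simple threshold check, sketched below. The t-value of 12.22 for item C7 is taken from the results; the other values in the usage example are hypothetical placeholders.

```python
def flag_valid_indicators(t_values, critical=1.96):
    """Keep indicators whose loading t-value exceeds the critical
    value (1.96 at the 95% confidence level) and identify the
    strongest indicator of the latent variable."""
    valid = {item: t for item, t in t_values.items() if abs(t) > critical}
    strongest = max(valid, key=valid.get) if valid else None
    return valid, strongest

# Hypothetical humanity-factor t-values; only C7's 12.22 is reported
# in the study.
humanity_t = {"C2": 5.10, "C3": 1.20, "C7": 12.22}
```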
Fig. 3 shows that 34 of the 35 items (indicator variables) were valid and reliable in representing the six latent variables. The humanity variable was represented by items C2-C7, with item C7 the strongest indicator (t-value of 12.22). The cognitive competence variable was represented by items C8-C10; the strongest indicator was item C10 (t-value of 7.31). Clinical skills competence was represented by items C11-C16, with item C11 the strongest indicator (t-value of 9.14). The professional behavior variable was represented by items C17-C25; item C20 was the strongest indicator (t-value of 8.62). The patient management variable was represented by items C26-C33; item C29 was the strongest indicator (t-value of 9.88). The interpersonal skill variable was represented by items C34-C35, with item C34 the stronger indicator (t-value of 8.98).
Discussion
David Kern stated that the process of curriculum development and evaluation requires an assessment of stakeholders’ needs in order to determine what aspects should be included in the curriculum [26]. This study was conducted to develop an evaluation tool for assessing the development of professional behavior and clinical competencies from the graduates’ perspective. The data collected with this tool will be used by UNISBA to evaluate its current medical curriculum.
The seven factors obtained from the EFA of the 35-item questionnaire indicated high communality, with almost all items having values of more than 0.7 [27]. The factor analysis yielded nine items for professional behavior, seven for humanity, seven for patient management, six for clinical skill competence, one for personal satisfaction, three for cognitive competence, and two for interpersonal skill.
Five of the seven factors identified in this study were strong because each comprised three or more items; the exceptions were the interpersonal skill and personal satisfaction factors. The single personal satisfaction item was merged into the patient management factor because it shared a similar intention, while the interpersonal skill factor was kept as it was, with two items. The final number of extracted factors was therefore six.
The factor with the most items was factor 1 (professional behavior); all nine of its items were retained. The item with the highest loading value was item 34 (0.802) in factor 2 (humanity). The items included in the patient management factor were aspects and attitudes that the graduates perceived they must have, extracted from the semi-structured interviews. As doctor-patient communication is known to increase patients’ adherence to medical treatment and enhance clinical outcomes [11,32], its inclusion in the questionnaire was deemed necessary.
It was surprising that only two items were retained in the interpersonal skill factor [3], although this skill is important for medical practitioners [7]. A possible explanation is the paternalistic way of managing patients in our country [33].
Most of the factors identified in this study were similar to the competency domains of the CanMEDS Physician Competency Framework [28,29] and the Accreditation Council for Graduate Medical Education competencies [30], except for the humanity factor (Table 7) [28-31]. Humanity was chosen as one of the factors because it represents the vision and mission of UNISBA’s Medical School; this factor might therefore not be of interest to other institutions unless they have a similar vision and mission.
1. Study limitations
Our study has several limitations. First, the EFA sample size was small because of the limited number of graduates available for the study. Second, the interviews were conducted in Bahasa Indonesia, so meaning could have been misinterpreted when the emerging themes were translated into English. Third, the original 55-item and 35-item questionnaires were distributed to the graduates in Bahasa Indonesia; meaning could likewise have been misinterpreted when the factors extracted from the literature were translated from English into Bahasa Indonesia for the questionnaire, and when they were translated back into English for publication.
2. Conclusion
Validity and reliability are essential psychometric properties of an evaluation tool, and high validity and reliability indices indicate that a tool is well constructed. The indices obtained here signify that this tool is valid and reliable for assessing professional behavior and clinical competencies development from the graduates’ perspective, and that it can be used to collect data as a source of information for curriculum evaluation. Future work will investigate the similarities and differences in the perspectives of all stakeholders in evaluating UNISBA’s medical program.
Acknowledgments
We would like to thank the Bandung Islamic University School of Medicine Alumni Association (Ikatan Alumni Medical School of UNISBA) for their support in the curriculum evaluation process. Our special gratitude goes to the Chief of the Alumni Association, Dr. Surya Santosa MD.
Table 1.
Table 2.
Table 3.
Table 4.
Table 5.
Table 6.
Table 7.
References
1. Arnold L. Assessing professional behavior: yesterday, today, and tomorrow. Acad Med 2002;77(6):502-515.
2. Hodges BD, Ginsburg S, Cruess R, et al. Assessment of professionalism: recommendations from the Ottawa 2010 Conference. Med Teach 2011;33(5):354-363.
3. Adam J, Bore M, McKendree J, Munro D, Powis D. Can personal qualities of medical students predict in-course examination success and professional behaviour?: an exploratory prospective cohort study. BMC Med Educ 2012;12:69.
4. Lynch DC, Surdyk PM, Eiser AR. Assessing professionalism: a review of the literature. Med Teach 2004;26(4):366-373.
5. Mueller PS. Incorporating professionalism into medical education: the Mayo Clinic experience. Keio J Med 2009;58(3):133-143.
6. Cruess R, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: a preliminary investigation. Acad Med 2006;81(10 Suppl):S74-S78.
7. Tsugawa Y, Tokuda Y, Ohbu S, et al. Professionalism mini-evaluation exercise for medical residents in Japan: a pilot study. Med Educ 2009;43(10):968-978.
8. Ruhe V, Boudreau JD. The 2011 Program Evaluation Standards: a framework for quality in medical education programme evaluations. J Eval Clin Pract 2013;19(5):925-932.
9. Dilmore TC, Rubio DM, Cohen E, et al. Psychometric properties of the mentor role instrument when used in an academic medicine setting. Clin Transl Sci 2010;3(3):104-108.
10. Yeo S, Chang BH. Students’ self-assessment of achievement of terminal competency and 4-year trend of student evaluation on outcome-based education. Korean J Med Educ 2019;31(1):39-50.
11. Kusmiati M, Hamid NA, Sanip S, Emilia O. Development of an instrument for preceptor evaluation of medical graduates’ performance: the psychometric properties. Med Sci Educ 2019;29(4):935-940.
12. Kusmiati M, Bahari R, Hamid NA, Sanip S, Emilia O. Validation of patient perception instruments for junior doctor performance: a factor analysis. Glob Med Health Commun 2019;7(1):71-80.
13. Stalmeijer RE, Dolmans DH, Wolfhagen IH, Muijtjens AM, Scherpbier AJ. The development of an instrument for evaluating clinical teachers: involving stakeholders to determine content validity. Med Teach 2008;30(8):e272-e277.
14. Boerebach BC, Lombarts KM, Arah OA. Confirmatory factor analysis of the system for evaluation of teaching qualities (SETQ) in graduate medical training. Eval Health Prof 2016;39(1):21-32.
15. Anthoine E, Moret L, Regnault A, Sébille V, Hardouin JB. Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures. Health Qual Life Outcomes 2014;12:176.
16. Stenfors-Hayes T, Hult H, Dahlgren LO. What does it mean to be a good teacher and clinical supervisor in medical education? Adv Health Sci Educ Theory Pract 2011;16(2):197-210.
17. Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health 2014;35:29-45.
18. Lesser CS, Lucey CR, Egener B, Braddock CH 3rd, Linas SL, Levinson W. A behavioral and systems view of professionalism. JAMA 2010;304(24):2732-2737.
19. Kasule OH. Medical professionalism and professional organizations. J Taibah Univ Med Sci 2013;8(3):137-141.
20. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA 2002;287(2):226-235.
21. Campbell EG, Regan S, Gruen RL, et al. Professionalism in medicine: results of a national survey of physicians. Ann Intern Med 2007;147(11):795-802.
22. Haque M, Zulkifli Z, Haque SZ, et al. Professionalism perspectives among medical students of a novel medical graduate school in Malaysia. Adv Med Educ Pract 2016;7:407-422.
23. Bahaziq W, Crosby E. Physician professional behaviour affects outcomes: a framework for teaching professionalism during anesthesia residency. Can J Anaesth 2011;58(11):1039-1050.
24. Worthington RL, Whittaker TA. Scale development research: a content analysis and recommendations for best practices. Couns Psychol 2006;34(6):806-838.
25. Guilford JP, Fruchter B. Fundamental statistics in psychology and education. 6th ed. New York, USA: McGraw-Hill; 1978.
26. Sweet LR, Palazzi DL. Application of Kern’s six-step approach to curriculum development by global health residents. Educ Health (Abingdon) 2015;28(2):138-141.
27. Shirali G, Shekari M, Angali KA. Assessing reliability and validity of an instrument for measuring resilience safety culture in sociotechnical systems. Saf Health Work 2018;9(3):296-307.
28. Frank JR. The CanMEDS 2005 Physician Competency Framework: better standards, better physicians, better care. Ottawa, Canada: Royal College of Physicians and Surgeons of Canada; 2005.
29. Frank JR, Snell L, Sherbino J. CanMEDS 2015 Physician Competency Framework. Ottawa, Canada: Royal College of Physicians and Surgeons of Canada; 2015.
30. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007;29(7):648-654.
31. Malik MU, Diaz Voss Varela DA, Stewart CM, et al. Barriers to implementing the ACGME outcome project: a systematic review of program director surveys. J Grad Med Educ 2012;4(4):425-433.
32. Costello AB, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval 2005;10(1):7.