KJME : Korean Journal of Medical Education


14 results for "Validity"


Original Research

Validation of performance evaluation indicator after graduation from medical school
Minkyung Oh
Korean J Med Educ 2025;37(4):419-427.
Published online November 27, 2025
DOI: https://doi.org/10.3946/kjme.2025.354
Purpose
Evaluating the performance of medical school graduates after graduation is important. However, reliable and comprehensive tools to evaluate the performance of medical professionals after graduation are lacking. The purpose of this study was to evaluate the validity and reliability of performance indicators for post-graduation competencies of medical school graduates.
Methods
Nineteen performance indicators were validated to evaluate competencies reflecting the medical school's desired graduate profile: medical professionalism, clinical skills, communication, collaboration, and leadership. The reliability of the assessment tool was tested using Cronbach's alpha, construct validity was evaluated through factor analysis, and content validity was evaluated using a Delphi expert panel.
Results
The overall reliability of the performance indicators was high, with a Cronbach’s alpha of 0.9110. Factor analysis revealed five core factors accounting for 70% of the total variance. These factors were classified as “collaboration,” “clinical professionalism,” “patient-centered care,” “professionalism,” and “systematic treatment and self-development.” Content validity was confirmed by the Delphi panel, and all items achieved a content validity ratio of 1, indicating strong content validity.
Conclusion
The developed performance indicators are reliable and valid tools for assessing the competencies of medical school graduates. These indicators can be used to evaluate the quality of medical education and to improve the curriculum. It is also important to establish a system to periodically assess competencies after graduation.
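The reliability and content-validity statistics reported in this abstract (Cronbach's alpha and a content validity ratio of 1) are standard computations. As an illustrative sketch only — the data, function names, and panel sizes below are hypothetical, not from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR: (n_e - N/2) / (N/2). Equals 1.0 when every panelist
    rates the item 'essential', as all items did in this study."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Hypothetical example: 4 respondents rating 3 indicators
ratings = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [4, 4, 4]]
alpha = cronbach_alpha(ratings)
cvr = content_validity_ratio(10, 10)  # all 10 panelists rate the item essential
```

A CVR is computed per item; an item kept by the whole panel yields the maximum value of 1, matching the "strong content validity" reported above.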
  • 504 View
  • 38 Download
Validity and reliability assessment of a peer evaluation method in team-based learning classes
Hyun Bae Yoon, Wan Beom Park, Sun-Jung Myung, Sang Hui Moon, Jun-Bean Park
Korean J Med Educ 2018;30(1):23-29.
Published online February 28, 2018
DOI: https://doi.org/10.3946/kjme.2018.78
Purpose
Team-based learning (TBL) is increasingly employed in medical education because of its potential to promote active group learning. In TBL, learners are usually asked to assess the contributions of peers within their group to ensure accountability. The purpose of this study is to assess the validity and reliability of a peer evaluation instrument that was used in TBL classes in a single medical school.
Methods
A total of 141 students were divided into 18 groups in 11 TBL classes. The students were asked to evaluate their peers in the group based on evaluation criteria that were provided to them. We analyzed the comments that were written for the highest and lowest achievers to assess the validity of the peer evaluation instrument. The reliability of the instrument was assessed by examining the agreement among peer ratings within each group of students via intraclass correlation coefficient (ICC) analysis.
Results
Most of the students provided reasonable and understandable comments for the high and low achievers within their group, and most of those comments were compatible with the evaluation criteria. The average ICC of each group ranged from 0.390 to 0.863, and the overall average was 0.659. There was no significant difference in inter-rater reliability according to the number of members in the group or the timing of the evaluation within the course.
Conclusion
The peer evaluation instrument that was used in the TBL classes was valid and reliable. Providing evaluation criteria and rules seemed to improve the validity and reliability of the instrument.
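The intraclass correlation coefficients reported above measure agreement among peer raters within each group. A minimal one-way random-effects ICC sketch follows; the rating matrix is hypothetical and the study's exact ICC model is not specified in the abstract:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1) for an (n_targets, k_raters) matrix:
    (MSB - MSW) / (MSB + (k-1) * MSW), from the one-way ANOVA mean squares."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)              # between targets
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical group: 4 students each rated by 3 peers
peer_ratings = [[8, 7, 8], [5, 6, 5], [9, 9, 8], [6, 6, 7]]
icc = icc_oneway(peer_ratings)
```

Values near the study's overall average of 0.659 would indicate moderate-to-good within-group agreement; perfect rater agreement yields an ICC of 1.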

Citations

Citations to this article as recorded by Crossref
  • Peer assessment in collaborative learning: A validated tool to enhance equity and engagement in nursing education
    Yujing Dong, Fangfang Du, Haiyan Yin, Shizheng Du
    Teaching and Learning in Nursing.2026; 21(1): e10.     CrossRef
  • Exploring an effective automated grading model with reliability detection for large‐scale online peer assessment
    Zirou Lin, Hanbing Yan, Li Zhao
    Journal of Computer Assisted Learning.2024; 40(4): 1535.     CrossRef
  • Improving learning experience through implementing standardized team-based learning process in undergraduate medical education
    Rebecca Andrews-Dickert, Ranjini Nagaraj, Lilian Zhan, Laura Knittig, Yuan Zhao
    BMC Medical Education.2024;[Epub]     CrossRef
  • Improving Peer Assessment Validity and Reliability Through a Fuzzy Coherence Measure
    Mohamed El Alaoui
    IEEE Transactions on Learning Technologies.2023; 16(6): 892.     CrossRef
  • Preparing first-year engineering students for cooperation in real-world projects
    Marietjie Havenga, Arthur James Swart
    European Journal of Engineering Education.2022; 47(4): 558.     CrossRef
  • The impact of asynchronous online anatomy teaching and smaller learning groups in the anatomy laboratory on medical students’ performance during the Covid‐19 pandemic
    Ming‐Fong Chang, Meng‐Lin Liao, June‐Horng Lue, Chi‐Chuan Yeh
    Anatomical Sciences Education.2022; 15(3): 476.     CrossRef
  • Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform
    Felix Garcia-Loro, Sergio Martin, José A. Ruipérez-Valiente, Elio Sancristobal, Manuel Castro
    Computers & Education.2020; 154: 103894.     CrossRef
  • Evaluation of an e‐book assignment using Fink's Taxonomy of Significant Learning among undergraduate dental hygiene students
    Brian B. Partido, Elizabeth Chartier, Jennifer Jewell
    Journal of Dental Education.2020; 84(10): 1074.     CrossRef
  • A Novel Grading Strategy for Team‐Based Learning Exercises in a Hands‐on Course in Molecular Biology for Senior Undergraduate Underrepresented Students in Medicine Resulted in Stronger Student Performance
    Gonzalo A. Carrasco, Kathryn C. Behling, Osvaldo J. Lopez
    Biochemistry and Molecular Biology Education.2019; 47(2): 115.     CrossRef
  • Effect of problem based learning models on student skills in conducting validity and reliability test of objective question
    R D Wulaningsih
    Journal of Physics: Conference Series.2019; 1402(7): 077108.     CrossRef
  • 10,521 View
  • 230 Download
  • Crossref
  • 14 Scopus
Original Article
The Pedagogical Validity for a Six Years Curriculum in Pharmacy Education
Eunbae Yang, Tai Jin Shin, Sunghak Kim, Yohan Go, Seunghee Lee
Korean J Med Educ 2005;17(3):225-238.
Published online December 31, 2005
DOI: https://doi.org/10.3946/kjme.2005.17.3.225
PURPOSE
This study examined the pedagogical validity of a six-year curriculum in pharmacy education in Korea.
METHODS
A focus group of five specialists analyzed the pedagogical validity of a six-year curriculum from the perspectives of higher education administrative policy, educational sociology, curriculum composition, and educational economics. In addition, three consultants advised on the main issues related to the reform of the school system in pharmacy education.
RESULTS
From the standpoint of the specialization of higher education institutions, it is unclear whether the reform of the pharmacy school system targets undergraduate or graduate education. The reform is likely to cause educational inequality, since students who lack financial support and cultural resources might give up entering pharmacy school. It may also ignite a power struggle between pharmacists and physicians. The six-year curriculum is expected to offer integrated experiences balancing theory and practice, with characteristics such as consistency, clarity, reliability, and legitimacy. From the perspective of educational economics, the validity of the reform can be secured only on the premise that expected income exceeds total costs, assuming the age-earnings profile remains constant.
CONCLUSION
Before discussing reform of the school system in pharmacy education, the quality of pharmacy education should first be improved through multifaceted efforts such as curriculum development, improvement of teaching and learning, introduction of an accreditation system, revision of the license examination, and graduate and continuing pharmacy education.

Citations

Citations to this article as recorded by Crossref
  • Career Perspectives of Future Graduates of the Newly Implemented 6-year Pharmacy Educational System in South Korea
    Eunyoung Kim, Saurav Ghimire
    American Journal of Pharmaceutical Education.2013; 77(2): 37.     CrossRef
  • Experiences of Students of Nursing College in Transition From a Three-year to a Four-year Nursing Education System
    HackSun Kim, JinGyung Cha
    The Journal of Korean Academic Society of Nursing Education.2012; 18(3): 465.     CrossRef
  • Emerging frontiers of pharmacy education in Saudi Arabia: The metamorphosis in the last fifty years
    Yousif A. Asiri
    Saudi Pharmaceutical Journal.2011; 19(1): 1.     CrossRef
  • The PharmD Degree in Developing Countries
    Shazia Jamshed, Zaheer Ud Din Babar, Imran Masood
    American Journal of Pharmaceutical Education.2007; 71(6): 125.     CrossRef
  • 5,970 View
  • 36 Download
  • Crossref
Sharing of Information among Students and Its Effect on the Scores of Clinical Performance Examination (CPX)
Hoon Ki Park, Oh Jung Kwon
Korean J Med Educ 2005;17(2):185-196.
Published online August 31, 2005
DOI: https://doi.org/10.3946/kjme.2005.17.2.185
PURPOSE
In high-stakes examinations such as the OSCE (objective structured clinical examination) or CPX (clinical performance examination), test security is generally accepted as a major concern for test validity. This study was conducted to investigate the effect of repeated, serial administrations of essentially the same standardized patient (SP)-based performance examination on examinees' scores.
METHODS
A performance-based examination using eight SP cases was administered to 123 senior medical students at Hanyang University School of Medicine. Students were randomly assigned to one of 16 groups of 8 students each. Three groups were tested serially each day, so the complete administration of the examination required 5 days. We compared the mean scores of the five groups of examinees tested on different days with ANOVA, and tested linear trends with multiple regression analyses.
RESULTS
For both checklist scores and written scores during the interstation work, the mean scores of the first-day groups were significantly lower than those of the subsequent groups, and there were slight linear trends in the scores over the five days. Scores related to case-specific history taking, information sharing, and clinical courtesy were significantly affected by the sharing of information between students. Scores related to patient satisfaction, physical examination, and physician-patient interaction were not influenced in the same way.
CONCLUSION
Test security may be violated during SP-based performance examinations even when the checklists are not accessible to the examinees. Test administrators should prepare alternative forms of cases to maintain the validity of SP-based performance examinations.

Citations

Citations to this article as recorded by Crossref
  • Education, Elderly Health, and Differential Population Aging in South Korea: A Demographic Approach
    Bongoh Kye, Erika Arenas, Graciela Teruel, Luis Rubalcava
    Demographic Research.2014; 30: 753.     CrossRef
  • Necessity of introducing postencounter note describing history and physical examination at clinical performance examination in Korea
    Jonghoon Kim
    Korean Journal of Medical Education.2014; 26(2): 107.     CrossRef
  • Experience of clinical skills assessment in the Busan-Gyeongnam Consortium
    Beesung Kam, Young Rim Oh, Sang Hwa Lee, Hye Rin Roh, Jong Ryeal Hahm, Sun Ju Im
    Korean Journal of Medical Education.2013; 25(4): 327.     CrossRef
  • Does sharing information before a clinical skills examination impact student performance?
    Jong Hoon Kim
    Medical Teacher.2010; 32(9): 747.     CrossRef
  • Correlations of Information Gathering Scores between Checklists and Interstation Works in a Clinical Performance Examination
    Jong Hoon Kim
    Korean Medical Education Review.2010; 12(2): 19.     CrossRef
  • Relationship between the Clinical Performance Examination and Associated Variables
    Kwi Hwa Park, Wook-Jin Chung, Duho Hong, Woon Kee Lee, Eak Kyun Shin
    Korean Journal of Medical Education.2009; 21(3): 269.     CrossRef
  • Inter-rater Reliability in a Clinical Performance Examination Using Multiple Standardized Patients for the Same Case
    Jinkyung Ko, Tai-Young Yoon, Jaehyun Park
    Korean Journal of Medical Education.2008; 20(1): 61.     CrossRef
  • The Comparison of Clinical Performance Examination Scores according to the Different Testing Time- Six Medical Schools in Seoul·Gyeonggi CPX Consortium 2005 -
    Jae-Jin Han, Hyesook Park, Ivo Kwon, Kyung-Ha Ryu, Eunkyung Eo, Najin Kim, Jaeeun Jung, Kyung Hyo Kim, Soon Nam Lee
    Korean Journal of Medical Education.2007; 19(1): 31.     CrossRef
  • Effects of Case Type and Standardized Patient Gender on Student Performance in a Clinical Performance Examination
    Jonghoon Kim, Kiyoung Lee, Dongmi Yoo, Eunbae Yang
    Korean Journal of Medical Education.2007; 19(1): 23.     CrossRef
  • The Correlation between CPX and Written Examination Scores in Medical Students
    Yera Hur, Sun Kim, Sung-Whan Park
    Korean Journal of Medical Education.2007; 19(4): 335.     CrossRef
  • 6,071 View
  • 62 Download
  • Crossref
Validating the Measurement Tool of Dental School Environment Satisfaction
Sun Kim, Hye Sook Kim
Korean J Med Educ 2003;15(3):195-202.
Published online December 31, 2003
DOI: https://doi.org/10.3946/kjme.2003.15.3.195
PURPOSE
The purpose of this study was to develop and validate an instrument to measure satisfaction with the dental school environment. To develop the tool, a pretest and a review of previous studies were carried out, along with student group discussions, in order to enhance the instrument's applicability in practice.
METHODS
Subscales were developed to measure satisfaction with the dental school environment, and the item goodness of the tool was verified through reliability and factor analyses.
RESULTS
The developed instrument showed measurement validity, and when applied in practice it proved able to measure the dental school environment specifically and distinctively.
CONCLUSION
The information given by the subscale analysis of the dental school environment can be used in practice as substantial evidence to diagnose the features of the relevant field, understand its problems, and take proper reform measures.
  • 4,017 View
  • 23 Download
Psychometric Analysis of Comprehensive Basic Medical Sciences Examination
Young Mee Lee, Yeon Hee So, Duck Sun Ahn, Ki Jong Rhee, Hyung Im
Korean J Med Educ 2002;14(2):301-306.
DOI: https://doi.org/10.3946/kjme.2002.14.2.301
PURPOSE
Since 2000, the Korea University Medical College has conducted the Comprehensive Basic Medical Sciences Examination as a summative test. A summative assessment must be dependable in that it meets the highest standards of reliability and validity. The purposes of this study were to examine the validity and reliability of the Comprehensive Basic Medical Sciences Examination and to improve the quality of the examination.
METHODS
The subjects of this study were the examination materials and the scores of the test. We conducted exploratory factor analysis to test validity, and Cronbach's alpha coefficient was used to examine reliability.
RESULTS
Only one factor was extracted from the exploratory factor analysis. Its eigenvalue was 4.61, and it explained 65.93% of the total variance. We interpreted the extracted factor as an ability in basic medical sciences knowledge. The reliability coefficients of the test ranged from 0.45 to 0.74. Of the total 335 items, 206 (58.0%) were acceptable overall; the range of difficulty was 0.21~1.00, and discrimination indices were higher than 0.20.
CONCLUSION
We confirmed that the Comprehensive Basic Medical Sciences Examination in 2000 met relatively high standards of reliability and validity. Item analysis could help to improve the quality of the examination.
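The item-analysis quantities named in this abstract — difficulty and discrimination indices — are computed from the examinees' 0/1 response matrix. A minimal sketch follows, using the common upper-minus-lower discrimination index; the data and the 27% grouping fraction are illustrative assumptions, not taken from the study:

```python
import numpy as np

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly,
    for a 0/1 matrix with rows = examinees and columns = items."""
    return np.asarray(responses, dtype=float).mean(axis=0)

def discrimination_index(responses, frac=0.27):
    """Upper-minus-lower discrimination: the difference in item difficulty
    between the top and bottom `frac` of examinees ranked by total score."""
    r = np.asarray(responses, dtype=float)
    n = max(1, int(round(frac * r.shape[0])))
    order = r.sum(axis=1).argsort()          # examinees from lowest to highest total
    lower, upper = r[order[:n]], r[order[-n:]]
    return upper.mean(axis=0) - lower.mean(axis=0)
```

Under this convention, items with difficulty between roughly 0.2 and 0.8 and a discrimination index above 0.20 — the threshold cited in the abstract — are usually judged acceptable.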

Citations

Citations to this article as recorded by Crossref
  • Assessment of the capacity of ChatGPT as a self-learning tool in medical pharmacology: a study using MCQs
    Woong Choi
    BMC Medical Education.2023;[Epub]     CrossRef
  • The Relationship between Academic Achievements and Curricular Changes on Anatomy Based on Basic Medical Education Examination
    Hyo Jeong Hong, Sang-Pil Yoon
    Korean Journal of Physical Anthropology.2016; 29(3): 105.     CrossRef
  • Outcome-based self-assessment on a team-teaching subject in the medical school
    Sang Pil Yoon, Sa Sun Cho
    Anatomy & Cell Biology.2014; 47(4): 259.     CrossRef
  • Item Analysis of Clinical Performance Examination Using Item Response Theory and Classical Test Theory
    Hyun-Sun Lim, Young-Mee Lee, Duck-Sun Ahn, Joon-Young Lee, Hyung Im
    Korean Journal of Medical Education.2007; 19(3): 185.     CrossRef
  • 5,444 View
  • 47 Download
  • Crossref
The Study on the Validity and Reliability of an Instruction Evaluation Questionnaire
Su Jin Chae, Kee Hyun Chang, Heung Sik Kang, Woo Sun Kim
Korean J Med Educ 2002;14(2):287-292.
DOI: https://doi.org/10.3946/kjme.2002.14.2.287
PURPOSE
The aim of this study was to analyze the instruction evaluation questionnaires used by the Department of Radiology, Seoul National University College of Medicine, in 2001, and to determine a means to improve them.
METHODS
This study used (1) a factor-analytic technique to identify the instructional factors determining the correlations between all pairs of evaluation items; (2) Cronbach's alpha, which allows the internal consistency of the instruction evaluation items to be estimated; and (3) a correlation method to compare the instruction evaluation scores among students, residents, peer faculty, and the faculty members themselves.
RESULTS
The results were summarized as follows. First, the Instruction Evaluation Questionnaire included 12 items: 6 comprising a 'teaching method connected with the lecturer' factor and the other 6 a 'teaching method connected with the teaching resource' factor. Second, the Cronbach's alpha index was found to be 0.91, indicating the high reliability of the questionnaire items for instruction evaluation. Third, the correlation between the evaluators was very low (r=.345); in particular, the average score given by peer faculty was 3.33, which was lower than the average scores of the other evaluators.
CONCLUSION
There is no consensus regarding the evaluation items that should be used in an instruction evaluation at a medical college. However, the instruction evaluation items should consider various factors connected with teaching and learning activities, and their contents need to be elaborated so that the items become more reliable and valid.

Citations

Citations to this article as recorded by Crossref
  • A Trend Study of Student' Consistent Responses to Course Evaluation
    Su-Jin Chae, Ki-Young Lim
    Korean Journal of Medical Education.2009; 21(3): 307.     CrossRef
  • Experience and Consideration on Online Course Evaluation by Medical Students
    So Dug Lim, Jongmin Lee, Hyung Seok ParK, Jae-Ran Yu, Kyung Yung Lee, In Sook Sohn, Ran Lee
    Korean Journal of Medical Education.2008; 20(4): 367.     CrossRef
  • 4,971 View
  • 30 Download
  • Crossref
PURPOSE
The purpose of this study was to examine the classification of validity and predictive validity of accreditation standards for medical schools.
METHODS
In order to analyze the validity of the standards of accreditation, an instrument, a survey on the input and output indicators in medical education, was developed to gather information. Cluster analysis and regression analysis were performed on the data to examine the classification validity and predictive validity of the standards of medical school accreditation.
RESULTS
The results of this research were as follows. First, Korean medical schools can be classified into seven types according to the amount of invested resources and the educational outcomes. The standards of medical school accreditation showed validity for the schools of the lead, average, and unconcerned types; therefore, applying uniform standards to all types of schools is not valid for enhancing the quality of medical education or for ensuring that medical schools carry out their social accountability. Second, the variables predicting the passing rate of the Korean Medical Licensing Examination (KMLE) were found to be the college's student quota, the number of journals per student, and the Korean Scholastic Achievement Test (KSAT) score. The variables predicting the level of students' satisfaction were the total lecture time, the educational facilities per student, and the KSAT score. The standards of accreditation accounted for 54.2% of the variance in predicting the passing rate of the KMLE and 84.4% in predicting the level of students' satisfaction.
CONCLUSION
These findings lead to the conclusion that new standards of medical school accreditation, comprehensively including more predictive variables and outcome variables, need to be developed.

Citations

Citations to this article as recorded by Crossref
  • Accreditation science—the need for evidence to guide the global expansion of medical education accreditation
    Sean Tackett, Mohammed Ahmed Rashid, Cynthia Whitehead, David Rojas, Roghayeh Gandomkar
    Medical Teacher.2026; : 1.     CrossRef
  • Changes in the accreditation standards of medical schools by the Korean Institute of Medical Education and Evaluation from 2000 to 2019
    Hyo Hyun Yoo, Mi Kyung Kim, Yoo Sang Yoon, Keun Mi Lee, Jong Hun Lee, Seung-Jae Hong, Jung –Sik Huh, Won Kyun Park
    Journal of Educational Evaluation for Health Professions.2020; 17: 2.     CrossRef
  • Describing the Evidence Base for Accreditation in Undergraduate Medical Education Internationally: A Scoping Review
    Sean Tackett, Christiana Zhang, Najlla Nassery, Christine Caufield-Noll, Marta van Zanten
    Academic Medicine.2019; 94(12): 1995.     CrossRef
  • Ensuring high‐quality patient care: the role of accreditation, licensure, specialty certification and revalidation in medicine
    John Boulet, Marta van Zanten
    Medical Education.2014; 48(1): 75.     CrossRef
  • Analysis on the Performance and Tasks of Accreditation System for Medical Colleges
    Eun Bae Yang
    Journal of the Korean Medical Association.2008; 51(6): 586.     CrossRef
  • 5,255 View
  • 53 Download
  • Crossref
The purpose of this study was to examine the content validity and factor validity of accreditation standards for medical schools. In order to analyze the validity of the standards, a questionnaire, an assessment survey on the current situation of medical education and the standards of accreditation, was developed to gather information. A total of 1,492 students and faculty were sampled from 41 medical schools, and the data collected from 662 students and faculty were used for the final analysis. The results of this research were as follows. First, the standards of medical school accreditation have content validity. All of the standards were rated as significant, in the range of 3.59~4.49, and the recognized importance of each standard differed depending on whether the respondent was a faculty member or a student and on previous experience in educational evaluation; therefore, new standards of accreditation that reflect these differences among groups should be developed. Second, the standards of medical school accreditation comprise six latent factors. Each factor has a different level of importance, and there are correlations among the factors; therefore, each factor's level of importance and the relationships between the factors should be considered when developing new standards of accreditation.

Citations

Citations to this article as recorded by Crossref
  • Accreditation science—the need for evidence to guide the global expansion of medical education accreditation
    Sean Tackett, Mohammed Ahmed Rashid, Cynthia Whitehead, David Rojas, Roghayeh Gandomkar
    Medical Teacher.2026; : 1.     CrossRef
  • Clinicians’ perspectives on quality: do they match accreditation standards?
    Nesibe Akdemir, Romana Malik, Theanne Walters, Stanley Hamstra, Fedde Scheele
    Human Resources for Health.2021;[Epub]     CrossRef
  • Describing the Evidence Base for Accreditation in Undergraduate Medical Education Internationally: A Scoping Review
    Sean Tackett, Christiana Zhang, Najlla Nassery, Christine Caufield-Noll, Marta van Zanten
    Academic Medicine.2019; 94(12): 1995.     CrossRef
  • Accreditation of Medical Education Programs: Moving From Student Outcomes to Continuous Quality Improvement Measures
    Danielle Blouin, Ara Tekian
    Academic Medicine.2018; 93(3): 377.     CrossRef
  • Ensuring high‐quality patient care: the role of accreditation, licensure, specialty certification and revalidation in medicine
    John Boulet, Marta van Zanten
    Medical Education.2014; 48(1): 75.     CrossRef
  • The importance of medical education accreditation standards
    Marta van Zanten, John R. Boulet, Ian Greaves
    Medical Teacher.2012; 34(2): 136.     CrossRef
  • 4,937 View
  • 39 Download
  • Crossref
It is not well known in Korea whether the entrance examination score has any predictive validity for the graduation score and the national licensure examination score. In addition, gender effects on the three scores were investigated. The study was conducted using two years' data on the three scores, collected from students of the classes of 1998 and 1999. Students who could not complete the medical education in four years were excluded, as were students who could not pass the national licensure examination. Correlations among the three scores were calculated, and the gender effect was examined by t-test; SPSS 9.0 was used for the statistical analysis. The correlation between the entrance examination and graduation examination scores was not significant at the 5% level, nor was the correlation between the entrance examination and national licensure examination scores. The correlation between the graduation examination and national licensure examination (0.635) was highly significant at the 0.1% level. The score difference between males and females on the entrance examination was not significant at the 5% level, whereas the differences on the graduation examination and national licensure examination were highly significant at the 0.1% level.

Citations

Citations to this article as recorded by Crossref
  • Relationship of academic achievement and residency training according to admission factors in dental school
    Seungwon Song, Minje Lee, Hoi-Jeong Lim
    Journal of Korean Academy of Oral Health.2022; 46(4): 161.     CrossRef
  • Correlation between academic achievements and admission criteria at the School of Dentistry, Chonnam National University
    Han-Joo Jung, Eun-Ju Lee, Min-Seok Kim
    Oral Biology Research.2019; 43(4): 289.     CrossRef
  • Factors Affecting College Adaptation and Academic Achievement in Nursing Students
    Mi Hyun Han
    Journal of Health Informatics and Statistics.2017; 42(1): 36.     CrossRef
  • Student selection factors of admission and academic performance in one medical school
    Keunmi Lee, Taeyoon Hwang, So young Park, Hyoungchul Choi, Wanseok Seo, Philhyun Song
    Yeungnam University Journal of Medicine.2017; 34(1): 62.     CrossRef
  • Exploration of examinees’ traits that affect the score of Korean Medical Licensing Examination
    Mi Kyoung Yim
    Journal of Educational Evaluation for Health Professions.2015; 12: 5.     CrossRef
  • Correlation of Academic Achievements with Cognitive Admission Variables and Demographics at Chungbuk National University Graduate Medical School
    Sang-Jin Lee, Woong Choi, Seok Yong Kim, Jae-Woon Choi
    Korean Journal of Medical Education.2009; 21(1): 59.     CrossRef
  • The Relationship between Senior Year Examinations at a Medical School and the Korean Medical Licensing Examination
    Ki Hoon Jung, Ho Keun Jung, Kwan Lee
    Korean Journal of Medical Education.2009; 21(1): 17.     CrossRef
  • Academic Motivation, Academic Stress, and Perceptions of Academic Performance in Medical Students
    Doehee Ahn, Gwihwa Park, Kwang Jin Baek, Sang-In Chung
    Korean Journal of Medical Education.2007; 19(1): 59.     CrossRef
  • Grades of Science and Non-science Courses of Medical Students Graduating from Different Types of High School
    Won Il Park, Soo Kyoung Jun, Min Seung Jung
    Korean Journal of Medical Education.2007; 19(2): 101.     CrossRef
  • 6,029 View
  • 48 Download
  • Crossref
This study analyzed the validity and reliability of the double-major data used for admission to Yonsei University College of Medicine. The 69 applicants and the 37 who were admitted were sampled, and the GPAs from the previous major, the interview scores, and the medical college GPAs were used as sources for analysis. This study estimated descriptive statistics, concurrent-related evidence of validity between previous-college GPAs and interview scores, predictive-related evidence of validity of previous-college GPAs and interview scores, and the inter-scorer reliability of the interview scores. The results of this study are as follows. First, 69 students applied, and 37 (53.6%) applicants were admitted; eleven of those admitted had graduated from the biochemistry department of the college of science. The percentile ranks of learning achievement of the successful candidates were 64.5~98.2 (1995), 43.6~86.6 (1996), and 22.8~96.9 (1997). Second, the estimates of concurrent-related evidence of validity were 0.729 (1994), 0.673 (1995), 0.562 (1996), and 0.876 (1997); thus, the candidates who had high GPAs also received high interview scores. Third, the predictive-related evidence of validity was not significant. Fourth, the generalizability of the inter-scorer reliability of the interview scores was 0.972 (1994) and 0.983 (1995). To improve the validity and reliability of double-major data, interview skills and educational programs have to be reoriented.
  • 3,635 View
  • 22 Download
The Korean Society of Otolaryngology has administered in-training examinations since 1992 and held the fortieth annual board examination for specialists in 1997, but there is as yet no evidence on the validity of these tests. The aim of this study was to examine the validity of the in-training examinations as a tool of formative evaluation, to present a personal progress index demonstrating construct validity, and to examine the validity of the board examination as a tool of summative evaluation. We performed statistical analyses on the consecutive personal scores of the 1995 and 1996 in-training examinations and the 1997 written and oral board examinations. Analysis of the averages, standard deviations, and distribution curves and the Wilcoxon signed rank test on the scores of the 1995 and 1996 in-training examinations demonstrated construct validity. A chi-square test revealed that those who had low scores in the in-training examinations of two consecutive years also had low scores in the 1997 board examinations, and the personal progress index demonstrated predictive validity. Correlation and linear regression analyses demonstrated a strong correlation between the 1997 written and oral board examinations. Analysis of the averages, standard deviations, and distribution curves and the Spearman rank correlation coefficient revealed that the 1997 written board examination had higher concurrent validity than the oral examination.

Citations

Citations to this article as recorded by Crossref
  • Predictive Value of the Korean Academy of Family Medicine In-Training Examination for Certifying Examination
    Jung-Jin Cho, Ji-Yong Kim
    Korean Journal of Family Medicine.2011; 32(6): 352.     CrossRef
  • Correlation of In-training Examination Score with the Residency Program or the Score of the Board Examination of Laboratory Medicine
    Jungwon Huh, Jongwan Kim, Jongwoo Park, Hyunok Kim
    Annals of Laboratory Medicine.2006; 26(3): 227.     CrossRef
The methodological and statistical validity of 382 original articles published in the Journal of the Korean Medical Association from January 1980 to December 1989 was reviewed using an author-devised checklist of 21 items (14 items for methodological validity and 7 items for statistical validity). Of the 297 articles using statistical analyses, 290 (97.6%) were found to contain at least one error in the statistical methods used. The mean and standard deviation of the 'validity score' of an article, defined as the number of valid items divided by the number of applicable items and multiplied by 100, were 43.8 and 15.2, respectively. The distribution of validity scores was as follows: 60 or over, 57 articles (14.9%); 30 to 59, 266 articles (69.6%); and under 30, 59 articles (15.5%). The proportion of articles with a validity score of 60 or over was significantly higher in descriptive studies (19.4%) than in analytic studies (8.4%, p=0.003). Articles scoring 60 or over were also more frequent in surveys (15.9%) than in experiments (8.2%), and in cross-sectional studies (16.8%) than in longitudinal studies (10.6%), but these differences were not statistically significant. The average validity score over two-year periods was highest in 1984-1985 (50.24) and lowest in 1986-1987 (38.85); there was no significant time trend over the 10 years (p>0.1). These results suggest that medical articles published in Korea in 1980-1989 fell short of the expected quality, with no evidence of improvement over time. We conclude that basic training in biostatistical methods for medical postgraduates and residents, more consultation by medical investigators with statisticians or other experts, and careful review by someone knowledgeable in biostatistics or research design before a manuscript is accepted are all needed. In addition, rebuttal of controversial points should be permitted through the journal.
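The validity score defined above — the number of valid items divided by the number of applicable items, multiplied by 100 — can be sketched directly. The example inputs are hypothetical; the study applied fewer than 21 items to some designs, since five methodological items apply only to analytic studies.

```python
# Sketch of the per-article validity score defined in the abstract:
# (valid items / applicable items) * 100. Example inputs are hypothetical.
def validity_score(valid_items: int, applicable_items: int) -> float:
    if applicable_items == 0:
        raise ValueError("no applicable checklist items")
    if not 0 <= valid_items <= applicable_items:
        raise ValueError("valid items must be between 0 and applicable items")
    return valid_items / applicable_items * 100

# e.g. a descriptive study where only 16 of the 21 items apply
score = validity_score(7, 16)
print(score)   # -> 43.75, close to the reported mean of 43.8
```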

Citations

Citations to this article as recorded by Crossref
  • Avoiding negative reviewer comments: common statistical errors in anesthesia journals
    Sangseok Lee
    Korean Journal of Anesthesiology.2016; 69(3): 219.     CrossRef
  • Escape from omnishambles in statistics: back to the basics
    Sangseok Lee
    Korean Journal of Anesthesiology.2015; 68(5): 431.     CrossRef
  • Statistical Trends in Family Medicine Journals
    Hae-Jin Kwon, Yong-Gyu Park
    Korean Journal of Family Medicine.2012; 33(1): 9.     CrossRef
Medical readers and reviewers need to critically assess the methodological and statistical validity of medical articles before accepting their results or conclusions. The authors developed a validity-assessing checklist of 21 items. The 14 items for methodological validity include the following: clear statement of the research hypothesis or specific aims, suitable focus, definition of the study population (or subjects), eligibility criteria, exclusion criteria, appropriateness of the sample, description of methods in detail, description of accuracy, description of reliability, presence of a control, susceptibility bias, performance bias, detection bias, and transfer bias; the last five items are applicable only to analytic studies. The 7 items for statistical validity cover the following errors: incomplete description of basic data, statistical test performed yet not defined, incomplete description of power or confidence intervals, inadequate description of measures of central tendency or dispersion, incorrect analysis, multiplicity in hypothesis testing, and unwarranted conclusion. The first three are 'errors of omission', and the others are 'errors of commission'. The authors suggest the checklist is very helpful, but not perfect; a critical mind is needed to distinguish minor errors from major fallacies.
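As a hedged sketch, the seven statistical-validity items above can be encoded as a small checklist that separates errors of omission from errors of commission. The item wording is paraphrased from the abstract, and the flagged set in the example is hypothetical.

```python
# Sketch: the 7 statistical-validity checklist items, with the first 3
# classed as errors of omission and the remaining 4 as errors of
# commission, as described in the abstract. The flagged set is invented.
STATISTICAL_ITEMS = [
    ("incomplete description of basic data", "omission"),
    ("statistical test performed yet not defined", "omission"),
    ("incomplete description of power or confidence interval", "omission"),
    ("inadequate description of central tendency or dispersion", "commission"),
    ("incorrect analysis", "commission"),
    ("multiplicity in hypothesis testing", "commission"),
    ("unwarranted conclusion", "commission"),
]

def count_by_kind(flagged):
    """Count flagged checklist errors by kind; `flagged` is a set of item names."""
    counts = {"omission": 0, "commission": 0}
    for name, kind in STATISTICAL_ITEMS:
        if name in flagged:
            counts[kind] += 1
    return counts

result = count_by_kind({"incorrect analysis",
                        "statistical test performed yet not defined"})
print(result)   # -> {'omission': 1, 'commission': 1}
```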

Citations

Citations to this article as recorded by Crossref
  • Nonparametric statistical tests for the continuous data: the basic concept and the practical use
    Francis Sahngun Nahm
    Korean Journal of Anesthesiology.2016; 69(1): 8.     CrossRef
  • Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain
    Kyoung Hoon Yim, Francis Sahngun Nahm, Kyoung Ah Han, Soo Young Park
    The Korean Journal of Pain.2010; 23(1): 35.     CrossRef
  • An assessment of statistical errors of articles in the Journal of Korean Academy of Prosthodontics: Comparison between Korean version and English version
    Dong-Gyu Park, Yong-Geun Choi, Young-Su Kim, Sang-Wan Shin
    The Journal of Korean Academy of Prosthodontics.2009; 47(3): 273.     CrossRef
  • Statistical Errors in Papers Published in the Journal of the Korean Society for Therapeutic Radiology and Oncology
    Hee Chul Park, Doo Ho Choi, Song-Vogue Ahn, Jin Oh Kang, Eun-Seog Kim, Won Park, Seung Do Ahn, Dae Sik Yang, Hyong Geun Yun, Eun Ji Chung, Eui Kyu Chie, Hongryull Pyo, Semie Hong
    The Journal of the Korean Society for Therapeutic Radiology and Oncology.2008; 26(4): 289.     CrossRef