
Seoul National University Hospital's Healthcare AI Research Institute announced on the 9th that it has released two proprietary medical AI models as open source to the global community.
The released models are "mvl-rrg-1.0," which analyzes chest X-ray images to generate radiology reports, and "hari-q2.5-thinking," a large language model specialized in medical reasoning. Both models are designed to assist physicians in clinical decision-making, with applications in medical image interpretation and text-based clinical reasoning, respectively.
The achievement was made possible through the Ministry of Science and ICT's "AI Research Computing Support Project." Seoul National University Hospital was selected for the program based on its demonstrated capability to independently develop large-scale medical AI models and to publish research meeting global standards. With the support of 64 H200 GPUs (roughly 4 petaflops of aggregate computing performance), the hospital established a high-performance environment for training complex AI models on large-scale medical data, enabling efficient training and validation of multimodal models that combine text and medical images.
mvl-rrg-1.0 is an AI model that automatically generates radiology reports by analyzing chest X-ray images. Beyond single-image analysis, it is designed to link a patient's historical and current images to reflect temporal changes such as disease progression or improvement. Trained on more than 360,000 publicly available medical images to infer patterns of lesion change, the model achieved a ROUGE-L of 34.1 and a BLEU-4 of 18.6 under image-only input conditions. Both are text-overlap metrics that score generated reports against radiologist-written references, and these figures rank among the world's best in automated chest X-ray report generation.
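To make the cited scores concrete: ROUGE-L is the F-measure over the longest common subsequence (LCS) of the generated and reference reports, conventionally reported on a 0-100 scale. A minimal single-pair sketch in plain Python (the example reports below are invented, not from the model):

```python
# Minimal sketch of ROUGE-L, one of the metrics cited for mvl-rrg-1.0.
# ROUGE-L is the F1 over the longest common subsequence (LCS) of the
# candidate and reference token sequences; published scores are
# usually this value multiplied by 100.

def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            if tok_a == tok_b:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F1 between two whitespace-tokenized report strings."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Invented example reports, for illustration only:
score = rouge_l("no acute abnormality",
                "no acute cardiopulmonary abnormality")
print(round(score * 100, 1))  # prints 85.7
```

Corpus-level benchmark scores average such per-report overlaps across a test set; BLEU-4 analogously measures 1- to 4-gram precision with a brevity penalty.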
The model is expected to reduce the burden of image comparison and interpretation in outpatient clinics and emergency rooms. In clinical settings, it automatically compares past and present images to quantitatively present the degree of lesion changes, helping physicians explain treatment progress to patients. In time-critical situations such as emergency rooms, it can rapidly flag findings requiring immediate intervention, such as pneumothorax, immediately after X-ray capture to assist physicians' initial assessments.
hari-q2.5-thinking is designed to understand clinical situations and perform the reasoning required for diagnosis and treatment. The model previously demonstrated its medical reasoning capability by achieving 89% accuracy on a Korean Medical Licensing Examination (KMLE) practice test. It can assist physicians' thinking when treating patients with complex, overlapping symptoms that make identifying the cause difficult. For example, when cough, abdominal pain, and headache present simultaneously, rather than performing simple symptom classification, the model considers the patient's medical history and clinical records together to provide step-by-step reasoning about the need for additional tests and the basis for a differential diagnosis. It can also serve as an educational tool explaining diagnostic logic to medical students and residents.

Both models are available on the Korea Health Data Platform (KHDP), the data platform of the National Strategic Technology Specialized Research Institute, and on the global AI platform Hugging Face.
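Since the models are published on Hugging Face, loading the text model would presumably follow the standard `transformers` pattern. The sketch below is hypothetical: the repository id `snuh-ai/hari-q2.5-thinking` and the prompt format are assumptions for illustration, not details from the announcement.

```python
# Hypothetical sketch of loading the released reasoning model with the
# Hugging Face transformers library. The repository id is an assumed
# placeholder -- check the hospital's actual Hugging Face listing.

REPO_ID = "snuh-ai/hari-q2.5-thinking"  # placeholder, not a confirmed id

def build_prompt(symptoms, history):
    """Assemble a simple free-text clinical-reasoning prompt (assumed format)."""
    return (
        "Patient history: " + "; ".join(history) + "\n"
        "Presenting symptoms: " + ", ".join(symptoms) + "\n"
        "Reason step by step toward a differential diagnosis and "
        "recommend any additional tests."
    )

def generate_reasoning(prompt, max_new_tokens=256):
    """Download the model (large!) and generate a reasoning trace."""
    # Imported lazily so the sketch can be read/run without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

prompt = build_prompt(["cough", "abdominal pain", "headache"],
                      ["type 2 diabetes"])
# generate_reasoning(prompt) would run the model; it is not called here
# to avoid downloading the weights in this illustration.
```

Model outputs in this setting are decision-support text for clinicians, not standalone diagnoses, consistent with the assistive framing described in the announcement.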
Based on these models, Seoul National University Hospital is pursuing expansion into specialty-specific models across 17 clinical departments including internal medicine, surgery, and pediatrics. The hospital is also building a multi-agent system where multiple AI models share roles in assisting clinical decisions. Upon completion of clinical validation, the hospital plans to sequentially release specialty-specific models to expand the medical AI research foundation available to healthcare professionals and researchers both domestically and internationally.
Lee Hyung-chul, Deputy Director of the Healthcare AI Research Institute, said, "High-performance GPU infrastructure has enabled medical AI research that trains and reasons across both text and medical images. The ability to compare past and present images to identify changes in patient conditions is particularly important in actual clinical decision-making, and we expect the models released today to more efficiently assist physicians in their clinical judgments."
