
Generative artificial intelligence has significantly boosted workplace efficiency across industries, but ensuring transparency in data processing has emerged as a critical challenge. Calls are growing for companies and institutions to disclose how they use personal information in plain, easily understandable terms, including the scope of data utilization and retention periods.
The Personal Information Protection Commission (PIPC) held a consultation meeting on Wednesday at the Korea Press Center in Seoul to discuss improvements to privacy policies in the generative AI sector. Major domestic and international AI companies participated, including Naver, SK Telecom, Kakao, OpenAI, and Google.
Participants shared findings from last year's privacy policy evaluations and examined key issues including how prompt input data is processed and used for training, clarification of legal grounds for data processing, and alignment with global policies. Companies explained that generative AI's technical characteristics make data processing structures complex and coordination with global headquarters policies challenging. However, they agreed that clearer, more accessible explanations are necessary to build user trust.
Attendees called for greater specificity to help users intuitively understand whether input data is used for training, how long data is retained, and how to opt out.
The PIPC identified deficiencies in adequacy, readability, and accessibility across the generative AI sector. Some services used vague terms such as "partners" or "service providers" when disclosing third-party data sharing, failing to clearly identify recipients. Others listed personal information categories too broadly, omitted specific legal grounds for processing, or expressed data retention periods ambiguously.
The PIPC has conducted privacy policy evaluations since 2024, targeting major services that utilize emerging technologies such as AI and autonomous driving or that process large volumes of sensitive personal information. The first evaluation in 2024 covered seven sectors, including big tech, online shopping, online platforms, hospitals, OTT services, entertainment, and AI recruitment; services in these sectors averaged just 57.9 points. After the commission strengthened guidance through evaluation manuals, briefings on revised guidelines, and corporate consultations, average scores rose to 71 points last year across seven sectors including connected cars, edutech, and smart homes.
Based on discussions from this meeting, the PIPC plans to supplement guidelines to help generative AI companies draft clearer privacy policies. The commission will publish revised "Privacy Policy Drafting Guidelines" next month and hold briefing sessions to help companies and institutions understand and implement the updated standards.
