Yonsei University · Department of Artificial Intelligence
AI•ISL
The Artificial Intelligence and Information Systems Laboratory (AI-ISL) is a deep learning research lab focused on information-theoretic approaches to machine learning problems.
Research focus
Information-theoretic foundations
We work toward a theoretical understanding of a broad range of deep learning problems, including generative models, privacy and safety, and network compression.
Advancing the theoretical understanding of discrete diffusion language models
Developing machine unlearning methods for large language models
Building safety alignment and agent-level controls for LLM behavior
Applying LLM quantization to make large models efficient and deployable
News
Lab updates
CVPR 2026 Findings Our group has 1 paper accepted: Memorization In Stable Diffusion Is Unexpectedly Driven by CLIP Embeddings.
ICLR 2026 Our group has 3 papers accepted: A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models; Rainbow Padding: Mitigating Early Termination in Instruction-Tuned Diffusion LLMs; and Rethinking Benign Relearning: Syntax as the Hidden Driver of Unlearning Failures.
SaTML 2026 Our group has 1 paper accepted: Differentially Private Adaptation of Diffusion Models via Noisy Aggregated Embeddings.
AAAI 2026 Our group has 1 paper accepted: An Information Theoretic Evaluation Metric For Strong Unlearning.
NeurIPS 2025 Our group has 2 papers accepted: SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early Alignment; and Information-Theoretic Discrete Diffusion.
EMNLP 2025 Main Our group has 2 papers accepted: R-TOFU: Unlearning in Large Reasoning Models; and SEPS: A Separability Measure for Robust Unlearning in LLMs.
ICML 2025 Workshop on Tiny Titans Our group has 1 oral presentation: Preserve then Quantize: Dominant-Subspace Guided Low-Rank Reconstruction.
ACL 2025 Findings Our group has 1 paper accepted: Assigning Distinct Roles to Quantized and Low-Rank Matrices Toward Optimal Weight Decomposition.
ICML 2025 Spotlight Our group has 1 paper accepted: Understanding and Mitigating Memorization in Generative Models via Sharpness of Probability Landscape.
AAAI 2025 Workshop on Privacy-Preserving Artificial Intelligence Our group has 1 paper accepted: Understanding Memorization In Generative Models Through A Geometric Framework.
NeurIPS 2024 Workshop on Statistical Foundations of LLMs Our group has 1 paper accepted: Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios.
ICML 2024 Our group has 1 paper accepted: Improved Communication-Privacy Trade-offs in L2 Mean Estimation under Streaming Differential Privacy.
Lab news ISL moved to the Department of Artificial Intelligence at Yonsei University.
NeurIPS 2023 Workshop on Diffusion Models Our group has 1 paper accepted: LoRA can Replace Time and Class Embeddings in Diffusion Probabilistic Models.
NeurIPS 2023 Our group has 2 papers accepted: Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback; and Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation.
ICCV 2023 Workshop on Low-Bit Quantized Neural Networks Our group has 1 oral presentation: Fully Quantized Always-on Face Detector Considering Mobile Image Sensors.
ICML 2023 Workshop on Federated Learning Our group has 1 paper accepted: Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation.
ECCV 2022 Our group has 1 paper accepted: Prune Your Model Before Distill It.
ICML 2022 Our group has 1 paper accepted: Neural Tangent Kernel Analysis of Deep Narrow Neural Networks.
AISTATS 2022 Our group has 1 paper accepted: An Information-Theoretic Justification for Model Pruning.
ICML 2021 Our group has 1 paper accepted: WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points.