Session 17: Transfer Learning
Tags: transfer learning, fine-tuning


Presenters

Aein Koupaei

Transfer Learning

The AI Talks presentation on Transfer Learning explores how models trained on one task can be repurposed for new, related tasks, offering improved data efficiency and faster training. It begins with an introduction to machine learning and deep learning, highlighting their limitations, especially the need for large labeled datasets and the accuracy problems that arise when data is scarce.

The core of the presentation defines transfer learning and explains its advantages: it reduces the need for large labeled datasets, accelerates training, and enhances model performance across domains like medical imaging and rare languages. It introduces foundational concepts such as source/target domains and tasks, and outlines different transfer learning approaches including instance-based, feature-based, and parameter-based strategies like full and partial fine-tuning.
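The parameter-based strategies mentioned above can be illustrated with a short sketch. This is a framework-free toy example, not code from the presentation: the `Layer` class and `freeze_backbone` function are hypothetical names used only to show the mechanics of partial fine-tuning, where early layers keep their source-task weights fixed and only the task-specific head is updated.

```python
# Minimal sketch of partial (parameter-based) fine-tuning, using plain
# Python to illustrate the idea without a deep-learning framework.
# `Layer` and `freeze_backbone` are illustrative names, not from any
# particular library.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # whether training updates this layer's weights

def build_pretrained_model():
    # Stand-in for a model whose weights were learned on a source task.
    return [Layer("conv1"), Layer("conv2"), Layer("conv3"), Layer("classifier")]

def freeze_backbone(model, n_trainable=1):
    # Partial fine-tuning: freeze all but the last n_trainable layers,
    # so only the task-specific head adapts to the target domain.
    for layer in model[:-n_trainable]:
        layer.trainable = False
    return model

model = freeze_backbone(build_pretrained_model())
print([(layer.name, layer.trainable) for layer in model])
# → only "classifier" remains trainable
```

Full fine-tuning corresponds to skipping the freezing step entirely, so every layer's weights continue to adapt to the target task.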

Practical applications are showcased in fields like computer vision (e.g., image classification, segmentation) and NLP (e.g., translation, question answering). The talk concludes by emphasizing that transfer learning is now a cornerstone of AI development, enabling high-performing models with fewer resources and opening doors to solving more diverse problems.