LLM Finetuning

What is finetuning

Why LLMs need finetuning

Types of LLM finetuning

Full finetuning

LLM instruction finetuning

Parameter-efficient finetuning (PEFT)

Memory footprint & bottleneck

  • full finetuning is memory-intensive: besides the model weights, training must hold gradients, optimizer states (Adam keeps two extra values per parameter), and activations, so the required memory can be several times the size of the model itself — this is the main bottleneck
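A back-of-the-envelope sketch of that bottleneck, assuming fp32 training with Adam and ignoring activations (the parameter count and byte costs are illustrative assumptions, not measurements):

```python
# Rough memory estimate for full finetuning with Adam in fp32.
# Byte counts per parameter are the standard fp32 sizes; the 7B
# parameter count is a hypothetical example.

def full_finetune_memory_gb(n_params: float) -> float:
    """Approximate memory (GB) for weights, gradients, and Adam
    optimizer states, excluding activations."""
    bytes_per_param = (
        4    # fp32 weights
        + 4  # fp32 gradients
        + 8  # Adam first and second moments (fp32 each)
    )
    return n_params * bytes_per_param / 1e9

# A hypothetical 7B-parameter model:
print(full_finetune_memory_gb(7e9))  # 112.0 GB before activations
```

Even before activations and framework overhead, a 7B model needs far more memory than it occupies at inference time, which is what motivates PEFT.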

Catastrophic forgetting

  • = a machine learning model forgets previously learned information as it learns new information
  • it happens because finetuning can significantly improve performance on a specific task BUT degrade performance on tasks the model previously handled well
  • how to avoid it
    • fine-tune on multiple tasks at once, so earlier abilities keep being rehearsed
    • consider parameter-efficient finetuning (PEFT), which leaves most pretrained weights frozen
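One way to see why the PEFT remedy above helps: LoRA-style methods freeze the pretrained weight matrix and train only a small low-rank correction, so most of the original knowledge is physically untouched. A minimal NumPy sketch, where the hidden size and rank are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 1024, 8                            # hidden size and LoRA rank (illustrative)
W = rng.standard_normal((d, d))           # pretrained weight, kept frozen
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base output plus a low-rank correction; only A and B are trained.
    # With B initialized to zero, this starts out identical to the frozen model.
    return x @ W.T + x @ (B @ A).T

full = W.size              # parameters full finetuning would update
peft = A.size + B.size     # parameters LoRA actually trains
print(full, peft, round(100 * peft / full, 2))  # 1048576 16384 1.56
```

Here only about 1.6% of the parameters are trainable; since the frozen `W` is never overwritten, the pretrained behavior can always be recovered by dropping the `B @ A` term.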

PEFT techniques

Multi-task finetuning

Finetuning vs Retrieval Augmented Generation (RAG)