Fine-tuning a GPT — LoRA

Chris Kuo/Dr. Dataman
18 min read · Jun 19, 2023

This post explains LoRA, short for "Low-Rank Adaptation of Large Language Models", a proven fine-tuning method. I will walk you through the LoRA technique, its architecture, and its advantages, and present the background concepts of "low-rank" and "adaptation" to aid your understanding. As in "Fine-tune a GPT — Prefix-tuning", I cover a code example and will walk you through the…
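The full walkthrough is truncated in this preview, so as a stand-in, here is a minimal sketch of the core idea. LoRA freezes a pretrained weight matrix W and learns only an additive low-rank update ΔW = BA, where B and A have rank r ≪ min(in, out). The class name `LoRALinear` and the hyperparameters `r` and `alpha` below are illustrative choices, not code from the article or any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = base(x) + (x @ A^T @ B^T) * (alpha / r).

    The pretrained weight W is frozen; only the low-rank factors A and B
    are trained. Names and defaults here are illustrative assumptions.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # B starts at zero and A is small random noise, so at initialization
        # the adapted layer behaves exactly like the pretrained layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

In practice this wrapper would be applied to a handful of layers (for GPT-style models, typically the attention projections), so the trainable parameter count drops from the full weight size to roughly r × (in + out) per adapted layer.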

