Training

Full Fine-Tuning

Quick Answer

Training all model parameters, contrasted with parameter-efficient methods like LoRA.

Full fine-tuning updates every parameter of the model during training. It demands the most memory and compute of any adaptation method but can achieve the best quality, and it was the standard approach before parameter-efficient methods existed. For large models it is impractical without substantial hardware, and it risks catastrophic forgetting, where the model loses knowledge acquired during pretraining. It is worth considering mainly for specialized domains where maximum quality matters more than parameter efficiency; for most practitioners, LoRA or QLoRA is the better choice.
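To see why memory is the bottleneck, a common rule of thumb for mixed-precision training with Adam counts roughly 16 bytes of model and optimizer state per parameter: fp16 weights (2 B) and gradients (2 B), plus fp32 master weights (4 B) and the two Adam moments (4 B + 4 B). The sketch below applies that rule of thumb to a 7B-parameter model and contrasts it with a LoRA-style setup; the 16 B/param figure and the 1% adapter fraction are illustrative assumptions (activations and framework overhead are excluded), not measurements of any specific model.

```python
GIB = 2**30

def full_ft_state_gib(n_params: float) -> float:
    # fp16 weights (2 B) + fp16 grads (2 B) + fp32 master copy (4 B)
    # + Adam moments (4 B + 4 B) = 16 bytes per parameter.
    return n_params * 16 / GIB

def lora_state_gib(n_params: float, adapter_frac: float = 0.01) -> float:
    # Frozen fp16 base weights (2 B/param); only the small adapter
    # carries gradients and optimizer state (16 B per adapter param).
    return (n_params * 2 + n_params * adapter_frac * 16) / GIB

n = 7e9  # a 7B-parameter model
print(f"full fine-tuning: ~{full_ft_state_gib(n):.0f} GiB")  # ~104 GiB
print(f"LoRA (1% trainable): ~{lora_state_gib(n):.0f} GiB")  # ~14 GiB
```

Under these assumptions, full fine-tuning a 7B model needs on the order of 100 GiB for model and optimizer state alone, which is why it typically requires multiple accelerators, while a small trainable adapter keeps the total near the cost of just holding the frozen weights.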

Last verified: 2026-04-08
