QLoRA (Quantized LoRA)

Combines 4-bit quantization of the frozen base model with LoRA adapters for ultra-efficient fine-tuning of large models

Known for Memory Efficiency
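The core idea can be sketched in a few lines: the base weight matrix is stored in 4-bit form and dequantized on the fly, while a small trainable low-rank update is added on top. The NumPy sketch below is a simplified illustration only; it uses plain block-wise absmax int4 quantization in place of the paper's NF4 data type, and the class and parameter names (`QLoRALinear`, `rank`, `alpha`, `block_size`) are illustrative, not an actual library API.

```python
import numpy as np

def quantize_absmax_4bit(w, block_size=64):
    """Block-wise absmax quantization to signed 4-bit ints
    (a simplified stand-in for QLoRA's NF4 data type)."""
    flat = w.reshape(-1, block_size)
    scales = np.abs(flat).max(axis=1, keepdims=True) / 7.0  # int4 range [-7, 7]
    q = np.clip(np.round(flat / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales, shape):
    """Recover a float32 approximation of the original weight."""
    return (q.astype(np.float32) * scales).reshape(shape)

class QLoRALinear:
    """Frozen 4-bit base weight plus a trainable low-rank LoRA update."""
    def __init__(self, w, rank=8, alpha=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.shape = w.shape
        self.q, self.scales = quantize_absmax_4bit(w)
        out_f, in_f = w.shape
        # LoRA factors: B starts at zero, so before any training the
        # layer's output equals the (dequantized) base layer's output.
        self.A = rng.normal(0, 1.0 / rank, size=(rank, in_f)).astype(np.float32)
        self.B = np.zeros((out_f, rank), dtype=np.float32)
        self.scaling = alpha / rank

    def forward(self, x):
        # Only A and B would receive gradients; q and scales stay frozen.
        w = dequantize(self.q, self.scales, self.shape)
        return x @ w.T + (x @ self.A.T) @ self.B.T * self.scaling
```

Because gradients flow only through the small `A` and `B` matrices, optimizer state is tiny, which is where most of the memory saving over full fine-tuning comes from.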

Facts

  • Interesting Fact 🤓

    • QLoRA enables fine-tuning of a 65B-parameter model on a single 48 GB GPU
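The 65B figure follows from simple back-of-envelope arithmetic: storing the frozen weights in 4-bit form takes roughly a quarter of the memory of 16-bit weights. The snippet below shows that estimate for weight storage only; activations, LoRA parameters, and optimizer state add overhead on top, so the numbers are indicative rather than exact.

```python
params = 65e9                   # 65B parameters

fp16_gb = params * 2 / 1e9      # 16-bit weights: 2 bytes/param -> ~130 GB
nf4_gb = params * 0.5 / 1e9     # 4-bit weights: 0.5 bytes/param -> ~32.5 GB

print(f"fp16 weights: {fp16_gb:.1f} GB, 4-bit weights: {nf4_gb:.1f} GB")
```

At ~32.5 GB for the quantized weights, the model fits on a single 48 GB GPU with room left for LoRA adapters and activations, whereas the 16-bit weights alone would exceed it several times over.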
