
LoRA (Low-Rank Adaptation)

Parameter-efficient fine-tuning technique for large language models using low-rank matrix decomposition
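The low-rank decomposition idea can be sketched as follows: instead of updating a full pretrained weight matrix W, LoRA freezes W and learns two small matrices A and B whose product forms the weight update. A minimal NumPy sketch (all dimension names and sizes here are illustrative, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8             # illustrative sizes; rank r << d
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# LoRA update: delta_W = B @ A, with far fewer parameters than W itself
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, shape (r, d_in)
B = np.zeros((d_out, r))                   # trainable, zero-init so delta_W starts at 0

def lora_forward(x, scale=1.0):
    """Forward pass: frozen weight plus the low-rank adaptation."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)  # equals W @ x at initialization, since B is zero

full_params = W.size            # 262,144 entries in the full matrix
lora_params = A.size + B.size   # 8,192 entries, ~3% of the full matrix
print(full_params, lora_params)
```

Because B is initialized to zero, the adapted model starts out identical to the pretrained one; only A and B receive gradients during fine-tuning.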

Known for Parameter Efficiency

Core Classification

Industry Relevance

Basic Information

Historical Information

Technical Characteristics

Evaluation

Facts

  • Interesting Fact 🤓

    • Can reduce the number of trainable parameters by roughly 99% while retaining about 95% of full fine-tuning performance
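The parameter-reduction claim above follows from simple arithmetic: a full update to a d_out × d_in weight matrix has d_out · d_in parameters, while a rank-r adapter has only r · (d_out + d_in). A small check with illustrative transformer-scale dimensions (the 4096 × 4096 shape is an assumption for the example, not taken from a specific model):

```python
def lora_reduction(d_out: int, d_in: int, r: int) -> float:
    """Fraction of parameters saved by a rank-r adapter vs. a full update."""
    full = d_out * d_in          # parameters in a full weight update
    lora = r * (d_out + d_in)    # parameters in the two low-rank factors
    return 1 - lora / full

# Example: a 4096 x 4096 projection with rank 8
print(f"{lora_reduction(4096, 4096, 8):.2%}")  # → 99.61%
```

At higher ranks the savings shrink linearly in r, so the fraction saved stays above 99% here for any rank up to about 20.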

FAQ about LoRA (Low-Rank Adaptation)

Contact: [email protected]