LoRAs are undoubtedly an efficient way of homing in on a particular style or, more generally, fine-tuning a model. However, they are not without their limitations. Here is a brief summary of both their advantages and their limitations.
Advantages
Efficiency: LoRAs significantly reduce the computational cost of fine-tuning large models. By training a relatively small number of additional parameters, you benefit from faster training and lower resource consumption than full model fine-tuning (see the sketch after this list).
Flexibility: You can train multiple LoRAs on the same underlying model for different tasks, e.g. one LoRA per style. The ability to create and switch between different LoRAs quickly can enhance the creative process. [Note: Pencil currently limits each workspace to one style LoRA and one product LoRA. We intend to lift this limitation in future updates.]
Preservation of the Base Model: The original, underlying model remains unchanged; each LoRA simply adapts it to a different use case.
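To make the efficiency point concrete, here is a minimal PyTorch sketch of the idea behind a LoRA: a frozen layer is wrapped with two small low-rank matrices, and only those matrices are trained. This is illustrative only (not Pencil's implementation), and the layer size, rank and scaling factor are assumptions chosen for the example.

```python
# Minimal LoRA sketch: wrap a frozen linear layer with two small trainable matrices.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # the base layer stays frozen
            p.requires_grad = False
        # Only these two low-rank matrices are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the low-rank update: W x + (B A x) * scale
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

base = nn.Linear(4096, 4096)               # ~16.8M frozen base weights
adapted = LoRALinear(base, rank=8)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(trainable)                            # 65,536 trainable parameters, under 0.5% of the base layer
```

Because the base weights are never modified, several such adapter sets can be trained independently and swapped over the same model, which is what makes switching between different LoRAs so cheap.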
Limitations
Scope of Adaptation: LoRAs work best for fairly narrow fine-tuning, e.g. a very specific style or a single product. They are less well suited for tasks that require substantial changes to the model's understanding or capabilities.
Limited by Base Model Capacity: Since LoRAs rely on the underlying model’s existing architecture and capabilities, they can only stretch so far. If the base model lacks understanding in a domain, a LoRA cannot fully compensate.
Don't Work Alone: LoRAs still require the full base model in order to work; they are add-ons and do not function as standalone models.