LoRA stands for Low-Rank Adaptation, and the term is often used interchangeably with 'custom models'.
In GenAI, fine-tuning a pre-trained model to work in a specific style or with a specific product typically requires significant computational resources. Traditionally this involves re-training the entire model, which is slow and costly.
Low-Rank Adaptation, or LoRA, provides a more efficient solution: instead of retraining the underlying model, you insert small, trainable components into it. These adjustments, or 'custom models', fine-tune the model for specific style or product uses while keeping the core structure and underlying model intact.
This approach is significantly more efficient - instead of re-training the whole model, the LoRA focuses only on what's needed for a specific task, e.g. generating content in a particular style, while still benefiting from the underlying model's core attributes.
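To illustrate why this is efficient, here is a minimal numeric sketch of the LoRA idea (not Pencil's or Bria's actual implementation): a large frozen weight matrix is left untouched, and only two much smaller low-rank matrices are trained. The dimensions and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # model dimension and LoRA rank (in practice r << d)

# Frozen pre-trained weight matrix: never updated during fine-tuning.
W = rng.standard_normal((d, d))

# Small trainable low-rank factors: only these are learned.
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))  # initialised to zero so the LoRA starts as a no-op

def forward(x):
    # Output = original model plus the low-rank adjustment A @ B.
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d))
# With B still zero, the adapted model matches the base model exactly.
assert np.allclose(forward(x), x @ W)

# Parameter comparison: retraining W means 64 weights here,
# while the LoRA trains only 32 - and the gap widens rapidly as d grows.
print(W.size, A.size + B.size)
```

The key point is that the original weights `W` stay intact, so the same base model can serve many custom models, each defined by its own small `A` and `B`.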
As of now, Pencil supports the creation of LoRAs with the Bria AI model.
Note: Pencil currently limits each workspace to one style LoRA and one product LoRA. We intend to lift this limitation in future updates.