⚠️ Important Notice:
This guide covers the training of fine-tuned models, which is an outdated feature. We highly recommend training Elements instead. For Element training, please consult this guide.
User-trained models are only available in the Legacy Mode of the Image Generation tool. Please note that several features, such as Presets and Edit with AI, may be unavailable in Legacy Mode.
Training your own image generation model with Leonardo.Ai opens up a world of creative possibilities. Fine-tuning allows for precise customization to your specific style or subject matter, especially useful in fields like game development and concept art. Here’s a guide to help you harness Leonardo.Ai’s model training effectively for the best results:
Key Considerations
Start with a well-curated image dataset that mirrors the diversity of your chosen theme, sticking to a consistent size and aspect ratio (e.g., 768 x 768 px), to ensure your model can generalise effectively to new scenarios. (A small preprocessing sketch follows the dataset examples below.)
To avoid overfitting, which hampers the model's performance on unseen data, incorporate a varied dataset with up to 40 high-quality, watermark-free images to teach the model a wide array of scenarios. (Overfitting causes the trained model to recall or recreate its training data instead of being flexible enough to adhere to the given prompt.)
Consistency in style, format, and aspect ratio is paramount for model recognition efficiency, while introducing variation within these constraints encourages the model to creatively reapply learned elements in novel contexts. (Striking the right balance between variation and consistency may require trial and error.)
Consistency: character position, style, and image composition.
Variation: the characters themselves and their clothing.
Bad Dataset ❌
Good Dataset ✅
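If your source images vary in size, a short script can normalise them before you upload. Below is a minimal sketch in Python using the Pillow library: it centre-crops each image to a square and resizes it to the 768 x 768 px example size mentioned above. The folder names are placeholders, not anything the platform requires.

```python
# pip install Pillow
from pathlib import Path

from PIL import Image

SRC = Path("raw_images")    # placeholder: folder of original images
DST = Path("dataset_768")   # placeholder: output folder for the dataset
SIZE = 768                  # the example edge length from this guide

DST.mkdir(exist_ok=True)

for path in sorted(SRC.iterdir()):
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")

    # Centre-crop to a square so every image shares the same aspect ratio,
    # then resize to the target resolution.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((SIZE, SIZE), Image.LANCZOS)

    img.save(DST / f"{path.stem}.jpg", quality=95)
```

Centre-cropping is a blunt instrument: for character datasets in particular, check the results and recrop manually wherever the subject ends up cut off.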
By focusing on these considerations, you’re set to optimize your model training journey with Leonardo.Ai, creating customised and consistent outputs for your projects. Now let’s get started!
Step-by-Step Training Guide:
Step 1: Create a Dataset
From the home page, navigate to Models & Training, then click Train New Model.
In the Select Category section, select the category of your choice. (Note: the category choice matters only for Element training and does not affect the outcome of Finetuned Model training.) Press Next to continue.
You will be taken to the Create Dataset section; if not, click Create New in the top right. Upload your images and name the dataset. Once done, press Next.
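If you prefer to script this step, Leonardo.Ai also offers a public API (on plans that include API access) with endpoints for creating datasets and uploading images to them. The sketch below assumes the POST /datasets and POST /datasets/{datasetId}/upload endpoints; the exact response field names shown are assumptions, so verify them against the current API reference before relying on this.

```python
# pip install requests
import json

import requests

API = "https://cloud.leonardo.ai/api/rest/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

# 1. Create an empty dataset.
resp = requests.post(f"{API}/datasets", headers=HEADERS,
                     json={"name": "my-character-dataset"})
resp.raise_for_status()
dataset_id = resp.json()["insert_datasets_one"]["id"]  # field names assumed

# 2. Ask for a presigned upload slot for one image...
resp = requests.post(f"{API}/datasets/{dataset_id}/upload",
                     headers=HEADERS, json={"extension": "jpg"})
resp.raise_for_status()
upload = resp.json()["uploadDatasetImage"]  # field names assumed

# ...then POST the file to the returned URL with the returned form fields.
with open("dataset_768/image_01.jpg", "rb") as f:
    s3 = requests.post(upload["url"],
                       data=json.loads(upload["fields"]),
                       files={"file": f})
    s3.raise_for_status()
```

Loop the upload section over your whole folder to fill the dataset.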
Step 2: Train Your Model
In the Start Training section, enter a name and description for the model, and include a trigger word, which will be entered into the prompt to help invoke the model.
Once done with the Model Details, expand the Advanced Settings and switch the Type to Finetuned Model. You may also opt to switch the Base Model if desired. Once done, click Start Training to initiate the training.
Note: You will be notified via email once the training process is complete (typically 30 minutes to 2 hours, depending on complexity). Once done, the model will be available under Models & Training > Your Models.
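The training step can likewise be started programmatically via the documented POST /models endpoint, and its status polled rather than waiting for the email. As before, this is only a sketch: the body and response fields shown are assumptions to check against the current API reference.

```python
import time

import requests

API = "https://cloud.leonardo.ai/api/rest/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

# Start a training job (body fields are assumptions; check the API docs).
resp = requests.post(f"{API}/models", headers=HEADERS, json={
    "name": "my-character-model",
    "description": "Character model trained from a curated 40-image dataset",
    "datasetId": "YOUR_DATASET_ID",   # placeholder: id from the dataset step
    "instance_prompt": "mychar",      # the trigger word for prompts
})
resp.raise_for_status()
model_id = resp.json()["sdTrainingJob"]["customModelId"]  # fields assumed

# Poll until training finishes (the email notification arrives regardless).
while True:
    info = requests.get(f"{API}/models/{model_id}", headers=HEADERS).json()
    status = info.get("custom_models_by_pk", {}).get("status")  # assumed
    print("training status:", status)
    if status == "COMPLETE":  # status value assumed
        break
    time.sleep(60)
```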
Step 3: Generate Images With Your Model
1. From the Homepage, navigate to Image Generation > Click on Legacy Mode in the top right > Click on the Model button > Click Select Other Model > Click on Your Models.
Once you are in the Your Models category, hover over the model you want and click the View button, then click the Generate with this Model button.
Alternatively, go to the Finetuned Models page, click on Your Models, hover over the model you want, click the View button, then click the Generate with this Model button.
This automatically opens the Image Generation tool in Legacy Mode with the model already selected for use.
Note: To use the newer features of the Image Generation tool, such as Presets and Quality mode, you will have to disable the Legacy Mode toggle in the top right. Note, however, that Image Generation V2 does not support user-trained models.
2. Click on your newly trained model, then click Generate with this Model. Note: The image preview for a newly trained model only appears once you have done your first generation with it.
3. Type your desired prompt, including your trigger word, and generate images. (A scripted version of this step is sketched after this list.)
4. Observe how the generated images capture the essence of the trained images, aligning with the style and preferences of your dataset. If the results are not satisfactory, you can retrain a new model by going to Training & Datasets, choosing your dataset, and selecting Edit Dataset. There you can delete and replace images, then train another model with the updated dataset.
ℹ️ Note: Due to technical limitations, it is not possible to update an existing model that has already been trained. This means that whenever a dataset is modified, a new model must be trained to reflect the changes.
5. You can delete any model you have created by going to Finetuned Models > Your Models, hovering the cursor over the model you would like to delete, and choosing Settings > Delete this Model.
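For completeness, generation with a custom model is also possible through the documented POST /generations endpoint, which accepts a modelId. The sketch below leads the prompt with the trigger word and adds the kind of quality and style modifiers that help the older Stable Diffusion bases (see Final Considerations); the response field names are, again, assumptions to verify against the API reference.

```python
import time

import requests

API = "https://cloud.leonardo.ai/api/rest/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

# The prompt starts with the trigger word, then style/quality modifiers.
resp = requests.post(f"{API}/generations", headers=HEADERS, json={
    "prompt": "mychar standing in a misty forest, concept art, "
              "highly detailed, dramatic lighting, sharp focus",
    "modelId": "YOUR_CUSTOM_MODEL_ID",  # placeholder: see Your Models page
    "width": 768,
    "height": 768,
    "num_images": 4,
})
resp.raise_for_status()
gen_id = resp.json()["sdGenerationJob"]["generationId"]  # fields assumed

# Fetch the results once the job has had time to finish.
time.sleep(30)  # a simple wait; polling the status field is more robust
result = requests.get(f"{API}/generations/{gen_id}", headers=HEADERS).json()
for image in result["generations_by_pk"]["generated_images"]:  # assumed
    print(image["url"])
```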
Final Considerations:
Enabling Alchemy can drastically increase the quality of the generation output, depending on the model. In addition, since trained models are based on older versions of Stable Diffusion, detailed prompts with more quality and style modifiers will typically produce better outcomes. Finally, note that you can use Elements and Image Guidance with your fine-tuned model, just like with the regular platform models.
And that does it for our in-depth Fine-tuned Model Training guide - we hope you found it useful! Remember, we're always adding new features and enhancing old ones, so be sure to check back in here from time to time to see updates or new ways of training models.
Happy Prompting! 🎨
If you have any questions or need further assistance, please reach out to Support via chat or email support@leonardo.ai
If you have large datasets and are interested in partnering on a custom model strategy for your company, please reach out to bizdev@leonardo.ai.
Frequently Asked Questions:
How do I make my custom model private?
Custom models are always private, regardless of whether you are a Free user or a Paid subscriber.
How do I locate my finetuned model within the Image Creation tool?
To locate your fine-tuned model in the Image Creation tool:
Click on the Legacy Mode toggle in the top right corner
Under the prompt input, click on the Finetuned Model box to open the drop-down, then click Select Other Model.
In the popup, click on Your Models, then press View when hovering your cursor over the desired model.
Press Generate with this Model in the following model details popup.
What are the differences between training a custom Finetuned model and custom Element?
Finetuned Models utilize a much older training method and older Base Models (Stable Diffusion 1.5 and 2.1).
Elements utilize up-to-date training methods, particularly for Flux. They can be based on most available SDXL models or Flux Dev.
As custom finetuned models are based on the older Stable Diffusion models, they can only be utilized within the Legacy Mode of the Image Creation tool, which uses an older generative pipeline.
Elements are usable in both Legacy and Classic Modes, and can be paired with different SDXL models if based on SDXL. Thanks to the newer generative pipeline and model bases, generations are more refined and detailed than those of custom finetuned models, especially at higher resolutions.
Element training offers specially built training pipelines for the three categories (Style, Character, and Object), providing a much better means of creating Elements specifically tailored to your various requirements.
Custom Elements can be mixed with other Elements, including other custom Elements, and can be dialed down to reduce their effect on the output image. Elements sit on top of the base model (SDXL or Flux), acting as a way to add specific knowledge to it, which gives you greater flexibility over the outcome of your images.
Custom Finetuned Models, on the other hand, offer less flexibility, as they are the underlying model used in the generative process. They can still be paired with the legacy platform Elements, but this still lacks the flexibility offered by custom Elements.