Element (LoRA) Training

Feature Guide: Training your own Element for more unique images.

Written by Ayumi Umehara
Updated in the last 15 minutes

Elements (also known as LoRAs) consistently apply unique aesthetics to generated images. Creating your own Elements offers greater flexibility and control when creating content that requires specific styles.

The trained Element can be used in the Image Creation tool, independently or combined with other Elements and Image Guidance options. It is a more reliable way to apply a style or aesthetic compared to Style Reference, especially if the Dataset is well-curated. Elements are now trained with Flux Dev as the default base model.

This guide explores how to train an Element and assemble a good-quality Dataset for the best results.

*More Flux models coming soon. Flux Elements are not compatible with Flux Schnell.


Step 1. Curating your Dataset

The first step in Element training is assembling a good Dataset. With Leonardo.Ai, you can train a Dataset on:

  • Styles - Your own artworks, impressionist style paintings, etc.

  • Objects - Products, clothing, furniture, etc.

  • Characters - Images of yourself, generated or drawn characters, etc.

    • Note: The Character setting for Flux Elements is optimised for realistic characters and actual people. For stylized characters, we recommend using SDXL, which allows consistent representation of the character’s outfit.

Tips and requirements for curating a good Dataset:

  • Stick to a consistent style, format, and image size (ideally 1024 x 1024px).

  • Minimum images: 10, though more images are recommended to ensure the style is properly learnt.

  • Maximum images: 50 for Flux Dev, 40 for SDXL (Advanced Settings).

  • Ensure images are high quality and free of any unwanted visuals or aesthetic qualities (such as pixellation).

  • Ensure images are cropped to feature the subject matter (close crops to show details such as texture can be included).

  • Avoid mixing art styles, as this can produce unpredictable outputs. Similar-looking styles can be added for versatility and a custom look.

  • Avoid repetitive subjects and backgrounds (unless intentional).

  • Images should feature a variety of subjects in the desired style.

    • This is essential to teach the model a wide array of scenarios, so it can creatively reapply the Element in new contexts with subject matter not found in the Dataset.

    • It also prevents overfitting, where the Element simply recalls or recreates its trained data, instead of being flexible enough to adhere to the given prompt.
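The image-count requirements above can be sanity-checked locally before uploading. The sketch below assumes a folder of image files on disk; the limits are the ones stated in this guide, and the function name is a hypothetical helper, not part of Leonardo.Ai:

```python
from pathlib import Path

# Limits stated in this guide: minimum 10 images for any base model,
# maximum 50 for Flux Dev and 40 for SDXL.
LIMITS = {"flux_dev": (10, 50), "sdxl": (10, 40)}
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

def check_dataset(folder: str, base_model: str = "flux_dev") -> list:
    """Return a list of problems found with a local dataset folder."""
    images = [p for p in Path(folder).iterdir()
              if p.suffix.lower() in ALLOWED_EXTENSIONS]
    lo, hi = LIMITS[base_model]
    problems = []
    if len(images) < lo:
        problems.append(f"only {len(images)} images; minimum is {lo}")
    elif len(images) > hi:
        problems.append(f"{len(images)} images; maximum for {base_model} is {hi}")
    return problems
```

An empty list means the folder passes the count check; verifying the recommended 1024 x 1024 size would additionally require an image library such as Pillow.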

Good Dataset (Style)

Here’s a Dataset with a diverse range of subject matter across various backgrounds and lighting scenarios, all in the same style (in this case, moody analog photography with a teal-blue tint). This will enable the Element to generate images of different subjects in a variety of settings with a similar aesthetic.

Bad Dataset (Style)


A small Dataset with a limited range of subject matter and settings. The Element will be more biased toward outputs similar to what was in the Dataset (in this case, a female fashion model in neo-rococo style fashion) and more inclined to use sky or solid-color backgrounds due to its limited knowledge (though this outcome may be acceptable depending on your requirements).

Good Dataset (Character)

To train for likeness, a Dataset should mostly include photos cropped closely to the face - ideally 8-12 images in different lighting conditions, angles, styles and expressions. Varied hairstyles, makeup and clothing are acceptable and can be useful for diverse appearances, but avoid similar backgrounds for all photos unless it's a deliberate choice.

It's also best to include a few medium-shot and full-body images to prevent overfitting in terms of image composition. This is especially useful if training an Element on a real person's likeness - if trained with only close-up images, the Element may generate an inaccurate depiction of their body shape, height, etc., due to a lack of knowledge of what they look like.

Bad Dataset (Character)


Here’s a Dataset that lacks enough close-ups of the face in different angles and lighting scenarios, and has similar backgrounds, lighting, colors or composition across some of the images. Images may also include more than one face.

This Dataset will result in an Element that lacks flexibility in accurately representing the face in different angles and lighting scenarios. Too many similar looking images can also easily lead to overfitting, reducing the creativity of outputs.

Remember: The diversity of source images is key to a flexible Element.
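As a practical aid for the cropping advice above, the arithmetic for the largest centered square crop can be sketched as below; pass the resulting box to any image library's crop call and then resize to the recommended 1024 x 1024 (the function name is ours, not a Leonardo.Ai API):

```python
def center_square_box(width: int, height: int) -> tuple:
    """Return (left, top, right, bottom) for the largest centered square.

    Feed the result to an image library's crop call, then resize the
    square to 1024 x 1024 to match the recommended dataset image size.
    """
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```

For a tight crop on a subject rather than the image center, adjust the box manually; this helper only standardises the aspect ratio.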

Step 2. Creating and training your Dataset

  1. From the home page, navigate to Models & Training and click Train New Model to get started.


  2. Select the type of Element you would like to train - either Style, Character or Object. This is important to ensure the best results. Press Next to continue.


  3. Select a dataset you have previously created. If you have not created one yet, you will be prompted to create a new dataset directly within the pop-up. Press Next to continue once you have selected or created a dataset.


  4. In the following step, you will be prompted to enter several details for the Element. This includes:

    Name: This will be the name of the Element.
    Description: This will be the description text displayed below the name.

    Trigger Word (if applicable): When the Element is applied, this word is included in the prompt by default and helps the AI recall specific patterns or styles it has learned. It should ideally be abstract, to avoid confusion with concepts already in the base model.


    E.g. if you’re training an Element to create images of a blue cartoon superhero cat, ‘bluecatxhero’ would be preferable to ‘blue cat’, as the AI may confuse the latter with other blue or random cartoon cats it already knows.

    Note: Avoid characters such as spaces and underscores (_) in the trigger word, as these may cause unwanted results in the images.

  5. Click Start Training to begin training your Element. This may take anywhere from 30 mins to a few hours, depending on the number of images and settings. You will receive an email once it is ready. You may also check the Job Status by navigating to the Job Status tab.
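The trigger-word advice can be turned into a quick pre-flight check. The sketch below is stricter than the guide strictly requires - it allows only letters and digits, which rules out the spaces and underscores warned against above (the helper is hypothetical, not part of Leonardo.Ai):

```python
import re

def is_valid_trigger_word(word: str) -> bool:
    """Accept only single-token trigger words made of letters and digits;
    spaces and underscores can cause unwanted results in images."""
    return bool(re.fullmatch(r"[A-Za-z0-9]+", word))
```

So 'bluecatxhero' passes, while 'blue cat' and 'blue_cat' are rejected.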

⚠️ Important Notice: The default Base Model for Element training is Flux Dev and is not compatible with other models including Flux Schnell. This can be switched in the Advanced Settings section.

Step 3. Using your newly trained Element

  1. Navigate to the Image Creation page. Directly under the prompt bar, click on Classic mode.

  2. Click the icon on the left of the prompt bar. Under the Platform Elements section, click View more and navigate to the Your Elements tab in the pop-up that follows.

  3. Select your Element, click confirm, and you’re good to go! Enter a prompt and watch as the Element is applied to your generations.

Leonardo’s Tips: Try experimenting with the strength of the Element for varied results. As the default strength may be too high depending on the trained Element, it is best to adjust it accordingly.

Reducing the strength will usually lead to more diverse creativity, but potentially at the expense of things such as character consistency.



Advanced Settings

While we recommend sticking to the default settings, you may want more control.

In the pop up that appears when you hit Start Training, click on Advanced Settings. The following options will appear:

Base Model: Defaults to Flux Dev. This option determines which model is used as the base for the Element and affects which models the Element is compatible with. (E.g. SDXL-based Elements can only be paired with SDXL models.)

  • Flux Dev is an excellent option for photorealism, cinematic images, prompt adherence/coherence and character likeness.

  • SDXL (notably Stable Diffusion XL 1.0) is a good general option with versatile knowledge of subject matter and styles. SDXL is particularly well suited to training stylized characters.

  • Avoid Lightning XL for Element training; it has a smaller overall knowledge base than SDXL and thus doesn't pair well with other models.

Train Text Encoder: Enabled by default. This relates to the part of the model that interprets and understands prompts and visual descriptions.


Frequently Asked Questions

Will an Element automatically update if I add / remove image(s) from its Dataset?


Changes to a Dataset will not affect any existing Elements. Any modifications to the Dataset would require a new Element to be trained.

Can I delete a Dataset after training an Element?


Yes. A Dataset can be safely deleted with no effect on the Element that was trained on it.

There are many visual issues when using my Element

  • Check that the Dataset is free of unwanted visual issues such as pixellation, overly sharpened edges or lens artifacts (if unintentional). If the Dataset is clean, try reducing the strength of the Element - extremely high strengths can cause unwanted visual issues and glitches in the image.

  • User-trained Elements may be too strong at the default strength setting; adjust the strength downward if results show visual glitching.

I want to generate the character in a completely different outfit or setting, but the character Element is not giving outputs that adhere to the prompt at all


This can happen when the character Element is overfitted, which is especially common with Flux Elements. We recommend reducing the number of images in the Dataset if there are more than 8, and training a new Element.
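If you retrain with a smaller Dataset as suggested, a reproducible way to pick the subset is to sample it with a fixed seed. A minimal sketch (the folder layout and helper name are hypothetical):

```python
import random
from pathlib import Path

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

def subsample_dataset(folder: str, k: int = 8, seed: int = 0) -> list:
    """Pick at most k images from a dataset folder, reproducibly."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in ALLOWED_EXTENSIONS)
    if len(images) <= k:
        return images
    return sorted(random.Random(seed).sample(images, k))
```

Using a fixed seed means two runs select the same subset, so you can compare retrained Elements fairly; remember to keep the subset varied in angle, lighting and composition per the curation advice above.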
