Applying unique styles to images with Elements

Element Training - Styles

Written by Ayumi Umehara
Updated over 2 months ago

Overview

This tutorial is part of a series of Element Training guides and covers how to consistently apply unique styles and aesthetics to your generated images by assembling a dataset and training an Element.

The trained Element can be used on its own with any SDXL model in the Image Creation tool, or alongside other Elements and Image Guidance options. It is a more reliable way to apply a style or aesthetic than Style Guidance, especially if the dataset is well put together.



Dataset Assembly

Here are some points to keep in mind when assembling a dataset for style training.

A dataset with a diverse range of subject matter and settings in the same style - this gives the Element ample variety, allowing it to generate images of many different subjects and settings.

A dataset with a limited range of subject matter and settings - the Element will be more biased towards subjects similar to those in the dataset (in this case, girls) and more inclined to use solid color backgrounds due to its limited knowledge. (Note: this outcome may be acceptable depending on your requirements.)

  • Versatility and Quantity: Having the style represented across images with varied subject matter, and in sizable quantity (ideally around 10 images minimum), is essential for creating a more versatile style Element. This allows you to easily generate images in the same style featuring subject matter not found in the dataset. For certain styles, especially artistic 2D styles, a larger quantity helps ensure the style is properly learnt.

    Avoid repetitive subjects and backgrounds unless they are something you intentionally want recreated.

    Left: Good quality image with clean edges and no compression artifacts. Right: Poor quality image with increased contrast and compression artifacts. Edges are distorted.

  • Quality: As standard practice, always ensure the images in the dataset are of high quality, with no compression artifacts or other errors. This prevents the Element from applying unwanted aesthetic qualities and visual elements to your outputs. The image resolution should ideally be at least 1024px on one side.

    The styles here are too diverse and will make the Element unpredictable in terms of style output.

  • Consistency: It is generally best to avoid mixing distinctly different styles, which can lead to unpredictable outputs. Similar-looking styles, however, can be mixed to add some versatility.

    Bad cropping - Too close to show the overall subject or point of interest. It is best to show as much of the subject or point of interest as possible.

  • Cropping: Images should ideally be cropped to feature mainly the subject matter. You may include some extreme close-ups to show particular stylistic details, such as texture. For environments and landscapes, it is best to crop around the area of interest. A square aspect ratio works best; if an image is not square, an auto-crop will be applied. A quick way to check resolution and aspect ratio before uploading is sketched after this list.
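If you would like to sanity-check a dataset before uploading, a small script can flag images that fall short of the quality and cropping guidelines above. This is a minimal sketch using the Pillow library, assuming your images are in a local folder; the folder name and thresholds are examples drawn from the points above, not requirements of the tool.

```python
# Minimal pre-flight check for a style dataset (assumes images are in a local folder).
from pathlib import Path
from PIL import Image

DATASET_DIR = Path("my_style_dataset")   # hypothetical folder name
MIN_SIDE = 1024                          # at least 1024px on one side (see Quality)
MIN_IMAGES = 10                          # rough minimum for a versatile style Element

images = [p for p in DATASET_DIR.iterdir()
          if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}]

if len(images) < MIN_IMAGES:
    print(f"Only {len(images)} images found; consider adding more for versatility.")

for path in images:
    with Image.open(path) as img:
        w, h = img.size
        if max(w, h) < MIN_SIDE:
            print(f"{path.name}: {w}x{h} is small; aim for at least {MIN_SIDE}px on one side.")
        if w != h:
            print(f"{path.name}: not square ({w}x{h}); an auto-crop will be applied.")
```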


Let's now put the dataset together. You can download the sample dataset at the end of this tutorial to follow along.

  1. To begin, navigate to Training & Datasets.

  2. Click on New Dataset and enter a name and, optionally, a description. Then upload the images.

  3. Once uploaded your dataset should look like this:



Training your dataset

Now that the dataset has been assembled, it is time to train your dataset.

  1. Click on the Start Training button.

  2. Ensure that the Training Type is set to Element (LoRA), then enter the Element name and description (optional).

  3. Set the Category to Style; this will optimize the training process for styles.

  4. Input a Trigger Word; this word will be automatically included behind the scenes when you activate the Element. It is generally best to use an abstract word as the trigger word so that the AI does not mistake it for part of the prompt. In this case we will use hndrwnstyle.

  5. Select the Base Model. For the best versatility, it is usually best to train on Stable Diffusion XL 1.0 (SDXL), which has versatile knowledge of subject matter and less aesthetic bias. You may select other models as well, but it is generally not recommended to train on either the Lightning XL or Anime XL models.

  6. Once all settings are confirmed, press Start to begin the training process. You will receive an email notification once it is complete. (The settings used in this tutorial are recapped in the sketch after this list.)
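As a convenience, you may want to keep a record of the training settings alongside your dataset, since (as noted in the Additional Notes below) the trained Element and the dataset are independent once training begins. The snippet below is just an illustrative recap of the choices made in steps 2 to 5; the file name and the example Element name are placeholders.

```python
# Save a record of the training settings used in this tutorial (mirrors the UI fields).
import json

training_settings = {
    "training_type": "Element (LoRA)",
    "category": "Style",
    "trigger_word": "hndrwnstyle",
    "base_model": "SDXL 1.0",
    "element_name": "Hand-drawn style",  # example name; use whatever you entered
}

with open("element_training_settings.json", "w") as f:
    json.dump(training_settings, f, indent=2)
```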


Using the Element

Now it is time to use the Element to generate some images.

  1. Navigate to Image Creation.

  2. On the left side of the prompt input box, click on the button, then below the Elements category, click View More.

  3. Click on the Your Elements tab to view your trained Elements and select the Element you have trained, then click Confirm.

  4. Once the Element has been added, you can click on its thumbnail to adjust the strength, which controls how much influence the Element has on your output image. (It is best to avoid setting the Element strength beyond 1.0. Negative values affect the outcome differently and can be experimented with.)

  5. Enter your prompt, select your preferred Preset / Model, change any other settings you wish, and then click Generate. (In this case we will use the SDXL 1.0 model with no Preset Style, in Fast generation mode, at a 1:1 aspect ratio; a rough scripted equivalent via the API is sketched after this list.)

  6. You have successfully generated an image with your new Element.
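If you also work with Leonardo's REST API, a generation like the one above can in principle be scripted. The sketch below is only an assumption-heavy outline: the model ID, the Element ID and, in particular, the parameter used to attach a user-trained Element (shown here as userElements / userLoraId) are placeholders to verify against the current API reference.

```python
# Rough sketch of generating with a trained Element via the REST API.
# NOTE: the element-related fields and the IDs below are assumptions / placeholders;
# confirm the exact names in the current Leonardo API reference before use.
import requests

API_KEY = "YOUR_API_KEY"                  # placeholder
MODEL_ID = "YOUR_SDXL_1_0_MODEL_ID"       # placeholder for the SDXL 1.0 model ID
ELEMENT_ID = "YOUR_TRAINED_ELEMENT_ID"    # placeholder for your trained Element's ID

payload = {
    "prompt": "a cat sitting by a window",  # example prompt
    "modelId": MODEL_ID,
    "width": 1024,
    "height": 1024,                          # 1:1 aspect ratio, as in step 5
    "num_images": 1,
    # Assumed shape for attaching a user-trained Element at strength 0.8:
    "userElements": [{"userLoraId": ELEMENT_ID, "weight": 0.8}],
}

resp = requests.post(
    "https://cloud.leonardo.ai/api/rest/v1/generations",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```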

💡 Leonardo's Tip: It is generally best to reduce the strength of your style Element when pairing it with the Lightning XL (Preset: Leonardo Lightning) or Anime XL (Preset: Anime) models.

Additional Notes:

  • Elements will behave differently when combined with different models and Preset Styles. It is best to experiment and see which combination of settings works best to create the aesthetic you want.

  • For purity of style, it may be best to switch to Fast generation mode and then upscale the images you like afterwards.

  • Remember that modifying or deleting your dataset does not affect any Elements trained on it. If you update the dataset, a new Element will have to be trained on it.


Resources

The sample dataset used in this tutorial can be downloaded here:
