Applying unique styles to images with Elements

Element Training - Styles

Written by Ayumi Umehara
Updated over a month ago

Overview

This tutorial is part of a series of Element Training guides and covers how to consistently apply unique styles and aesthetics to your generated images by assembling a dataset and training an Element.

The trained Element can be used on its own with either Flux or any SDXL model (depending on its base model) in the Image Creation tool, or alongside other Elements and Image Guidance options. It is a much more reliable way to apply a style or aesthetic than Style Reference, especially if the dataset is well put together.



Dataset Preparation

Here are some points to keep in mind when preparing a dataset for styles.

A dataset with a diverse range of subject matter and settings in the same style - this gives the Element ample variety, allowing it to generate images of many different subjects and settings.

A dataset with a limited range of subject matter and settings - the Element will be more biased toward subjects similar to those in the dataset (in this case, girls) and more inclined to use solid-color backgrounds due to its limited knowledge. (Note: this outcome may be acceptable depending on your requirements.)

  • Versatility and Quantity: Having the style represented across images featuring varied subject matter, and in sizable quantity (ideally around 10 images minimum), is essential for creating a more versatile style Element. This allows you to easily generate images in the same style featuring subject matter not found in the dataset. With certain styles, especially artistic 2D styles, a larger quantity and variety of featured subjects helps ensure the style is properly learnt.

    Avoid repetitive subjects and backgrounds unless they are something you intentionally want the Element to recreate consistently.


    Left: Good quality image with clean edges and no compression artifacts. Right: Poor quality image with increased contrast and compression artifacts. Edges are distorted.

  • Quality: As standard practice, always ensure the images in the dataset are high quality, with no compression artifacts or other errors. This prevents the Element from applying unwanted aesthetic qualities and visual elements to your outputs. Image resolution should ideally be at least 1024px on one side.

    The styles here are too diverse, which will make the Element's style output more unpredictable.

  • Consistency: It is generally best to avoid mixing distinctly different styles, which leads to unpredictable outputs. Similar-looking styles, however, can be mixed to add some versatility.

    Bad cropping - too close to show the overall subject or point of interest. It is best to show as much of the subject or point of interest as possible.

  • Cropping: Images should ideally be cropped to feature mainly the subject matter. You may opt to include some extremely cropped-in images to show certain stylistic details, such as texture. For environments and landscapes, it is best to crop around the area of interest. A square aspect ratio works best; an auto-crop will be applied if the image is not square.
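If you assemble your dataset locally before uploading, a quick script can flag images that fall short of these guidelines. The sketch below is illustrative only: it checks PNG files by reading their headers with the Python standard library, and the folder path, file format, and aspect-ratio threshold are assumptions, not Leonardo requirements beyond the guidance above.

```python
import struct
from pathlib import Path

MIN_SIDE = 1024   # guideline above: at least 1024px on one side
MIN_COUNT = 10    # guideline above: roughly 10 images minimum

def png_size(path):
    """Read width/height from a PNG file's IHDR header."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"{path} is not a PNG")
    # Width and height are big-endian 32-bit ints at bytes 16-23.
    return struct.unpack(">II", header[16:24])

def check_dataset(folder):
    """Return a list of guideline issues found in a dataset folder."""
    paths = sorted(Path(folder).glob("*.png"))
    issues = []
    for p in paths:
        w, h = png_size(p)
        if max(w, h) < MIN_SIDE:
            issues.append(f"{p.name}: longest side {max(w, h)}px, below {MIN_SIDE}px")
        if min(w, h) * 2 < max(w, h):
            # Very elongated images lose content to the auto-crop.
            issues.append(f"{p.name}: far from square ({w}x{h}); auto-crop may remove content")
    if len(paths) < MIN_COUNT:
        issues.append(f"only {len(paths)} images; aim for at least {MIN_COUNT}")
    return issues
```

An empty result means the basic checks passed; it does not guarantee the subjective qualities (style consistency, subject variety) discussed above.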



Training your dataset

Now that you have put together images for the dataset, it is time to train your style Element.

  1. On the Home page, navigate to Models & Training. Then at the top, click on the Train New Model button.


  2. In the following popup, select the Style category. It is important that the correct category is selected. Press Next to continue.

  3. If you have never created a dataset, you will be taken to the dataset creation section. Otherwise, click the Create New button in the top right. Upload the images you have assembled, enter a name for the dataset, and press Next to continue.

  4. Enter the Model Details in the following step and input a Trigger Word; this word will be automatically included behind the scenes when you activate the Element. It is generally best to use an abstract word as the trigger word to avoid the AI mistaking it for part of the prompt. In this case we will use hndrwnstyle.

    Avoid using characters such as spaces and _ in the trigger word, as they may cause unwanted results in the images.


  5. (Optional) Under Advanced Settings, you may opt to change the Base Model if you do not intend to use the Element with Flux. Do note that it is generally not recommended to train on Lightning XL.

  6. Once all settings are confirmed, press Start Training to begin the training process. You will receive an email notification once it is complete.
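The trigger-word advice above can be summed up as "one abstract token, letters and digits only." As a sanity check before training, a sketch like the following could validate a candidate word; the exact allowed character set here is an assumption for illustration, not a documented Leonardo rule beyond the warning about spaces and underscores.

```python
import re

def valid_trigger_word(word):
    """Return True if the word is a single token of letters/digits,
    avoiding the spaces and underscores the guide warns against."""
    # Assumed rule for illustration: letters and digits only.
    return bool(re.fullmatch(r"[A-Za-z0-9]+", word))
```

For example, hndrwnstyle passes, while "hand drawn" and "hand_drawn" would be flagged.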


Using the Element

Now it is time to use the Element to generate some images.

  1. Navigate to Image Creation.

  2. On the left side of the prompt input box, click on the button, then below the Elements category, click View More.

  3. Click on the Your Elements tab to view your trained Elements and select the Element you have trained, then click Confirm.

    Note: If the Element is not visible, it is possible that the current active Preset / Model is not compatible. You will have to switch to the compatible Model / Preset to continue.

  4. Once the Element has been added, you may click on its thumbnail to adjust the strength, which controls how much influence the Element has on your output image. (It is best to avoid setting the Element strength beyond 1.0. Negative values affect the outcome differently and can be experimented with.)

  5. Enter your prompt, select your preferred Preset / Model, change other settings if you wish and then click Generate. (In this case we will use the SDXL 1.0 model with no Preset Style on Fast generation mode in 1:1 aspect ratio)

  6. You have successfully generated an image with your new Element.

💡 Leonardo's Tip: It is generally best to reduce the strength of your style Element when pairing it with the Lightning XL (Preset: Leonardo Lightning) and Anime XL (Preset: Anime) models.

Additional Notes:

  • Elements will behave differently when combined with different models and Preset Styles. It is best to experiment and see which combination of settings works best to create the aesthetic you want.

  • Elements will be trained using Flux as the base model by default. Any Elements trained on Flux are incompatible with SDXL models and vice versa.

  • For purity of style, it may be best to generate in Fast generation mode and then upscale the images you like afterward. (SDXL only)

  • Remember that modifying or deleting your dataset does not affect any Elements already trained on it. If you update the dataset, a new Element must be trained on it.


Resources

The sample dataset used in this tutorial can be downloaded here:
