
Using Omni Models and the Inline Editor

Feature Release: Introducing Omni image generation models and Omni Editing, facilitated by the new Inline Editor.

Written by Susan Bhasme
Updated yesterday


Introduction

Omni models have greater contextual understanding of both image and text inputs, leading to more accurate, higher-quality results. Leonardo.Ai now offers two powerful new Omni Models: Black Forest Labs' FLUX.1 Kontext and GPT-Image-1.

Both FLUX.1 Kontext and GPT-Image-1 are available to use as standalone image generation models and through the Omni Editing experience, giving you flexibility whether you're creating from scratch or refining something you've made.


Features

Omni Editing with the Inline Editor

Omni Editing is powered by the latest Omni models and facilitated by the Inline Editor, a new prompt bar that appears when viewing a generated image.

The Inline Editor lets you effortlessly handle anything from small tweaks to whole vibe changes. Instantly add references and type out instructions for what you want to change, and the editor will follow.

  • Use natural language to make quick, precise changes to your images while keeping the details you care about intact.

  • Add and edit text while preserving font and respecting contextual placement.

  • Reference up to 6 images* and simply describe how they should come together, so you can mix, match and blend all in one step (e.g. the font and heading from one, the character from another, and the style or lighting from the next).

*Note: Only GPT-Image-1 supports multi-image references at present.

More about Omni Models

Omni models come with various generation and editing capabilities. Below is a brief list of some of the features of each model:

FLUX.1 Kontext

  • 1 Reference Image maximum (when using the Inline Editor, the generated image serves as the single Reference Image; multi-image reference support is coming soon)

  • Prompts limited to approximately 500 words

  • Best for basic and extensive edits

  • Style transfer capabilities utilizing reference images

  • Maintains character consistency across edits

  • Best used for editing images

GPT-Image-1

  • 6 Reference Images maximum

  • Good for basic and extensive edits

  • Great for editing text

  • Maintains character consistency across edits

  • Combine different image inputs in various ways

  • Best used for generating new images, especially when combining elements from multiple references

  • Best used with Quality Mode set to High


How to use Omni Models

In the Image Creation tool

  1. Navigate to the Image Creation tool

  2. From the left sidebar, click on Models & Presets. Select either FLUX.1 Kontext or GPT-Image-1, then adjust generation settings (such as Quality and Aspect Ratio) as needed.

  3. [Optional] On the left of the prompt bar, click on the image button and add reference images (Up to 6 reference images for GPT-Image-1)

  4. Enter your text prompt, ensuring you include specific instructions on what to generate or how edits should be made (this is especially important when multiple image references are in use)

  5. Click Generate

To edit generated images in the Image Creation tool

  1. Click on the desired image you would like to edit

  2. To switch between Omni models, click on the settings button. You may also change the quantity and quality of images generated.

  3. [Optional] Click on Add Image to include reference images (up to 6 via the Inline Editor for GPT-Image-1)

  4. Enter your text prompt, ensuring you include specific instructions on how edits should be made (this is especially important when multiple image references are in use)

  5. Click Generate

  6. To make iterative edits, click on the newly generated image to open the Inline Editor again.

From your Library

  1. Navigate to your Library

  2. Click on an image you would like to edit

  3. To switch between Omni models, click on the settings button. You may also change the quantity and quality of images generated.

  4. [Optional] Click on Add Image to include reference images (up to 6 for GPT-Image-1)

  5. Enter your text prompt, ensuring you include specific instructions on how edits should be made (this is especially important when multiple image references are in use)

  6. Click Generate

  7. To make iterative edits, click on the newly generated image to open the Inline Editor again.

⚠️ Notice: Avoid excessive iterative edits, as they can degrade image quality.


Prompt Guide

As Omni models behave differently from standard image generation models, it is important that prompts are optimized for them. Included in this guide are some tips for prompting, especially for FLUX.1 Kontext.

For FLUX.1 Kontext:

  • Be specific: Precise language gives better results. Use exact color names, detailed descriptions, and clear action verbs instead of vague terms.

  • Start simple: Begin with core changes before adding complexity. Test basic edits first, then build upon successful results. FLUX.1 Kontext can handle iterative editing very well.

  • Preserve intentionally: Explicitly state what should remain unchanged. Use phrases like "while maintaining the same [facial features/composition/lighting]" to retain important elements.

  • Iterate when needed: Complex transformations often require multiple steps. Break dramatic changes into sequential edits for better control.

  • Name subjects directly: Use "the woman with short black hair" or "the red car" instead of pronouns like “her”, "it," or "this" for clearer results.

  • Use quotation marks for text: Quote the specific text you want to change: "Replace 'Leonardo' with 'da Vinci'" works better than generalized text descriptions (e.g. "Change the text on the sign").

  • Control composition explicitly: When changing backgrounds or settings, specify "keep the exact camera angle, position, and framing" to prevent unwanted repositioning.

  • Choose verbs carefully: "Transform" might imply a complete change, while "change the clothes" or "replace the background" gives you more control over what actually changes.

Tip: It is best to keep a good balance between being explicit with your instructions while keeping the overall prompt simple.
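To make the tips above concrete, here is a small illustrative helper that assembles an edit prompt from a specific change plus explicit preservation and composition clauses. The function and its parameters are hypothetical, invented for this sketch; they are not part of any Leonardo.Ai or FLUX.1 Kontext API.

```python
# Hypothetical prompt-builder applying the FLUX.1 Kontext tips above:
# be specific, preserve intentionally, and control composition explicitly.

def build_edit_prompt(change, preserve=None, keep_composition=False):
    """Compose an edit prompt from a specific change and preservation clauses."""
    parts = [change]
    if preserve:
        # "Preserve intentionally": name exactly what should stay the same.
        parts.append("while maintaining the same " + ", ".join(preserve))
    if keep_composition:
        # "Control composition explicitly" when changing backgrounds or settings.
        parts.append("keep the exact camera angle, position, and framing")
    return ", ".join(parts)

print(build_edit_prompt(
    "change the background to a beach",
    preserve=["facial features", "lighting"],
    keep_composition=True,
))
```

A simple edit needs no extra clauses (`build_edit_prompt("change the car to red")` returns the prompt unchanged); the clauses are worth adding as soon as the edit risks altering elements you care about.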

Prompt Examples (For FLUX.1 Kontext)

Basic Editing
Simply stating changes usually works for simple edits.
Change the car to red

Reference Image
Image result using the prompt: ​Change the car to red

Controlled edit with style preservation
You may state what aspect / element of the image you would like to maintain.
Change to daytime while maintaining the same style of the painting

Reference Image
Image result using the prompt: Change to daytime while maintaining the same style of the painting


Complex Edits
You may change multiple things in the input image as long as the prompt is not overly complex.

change the setting to daytime, add a lot of people walking on the sidewalk while maintaining the same style of the painting

Reference Image
Image result using the prompt: change the setting to daytime, add a lot of people walking on the sidewalk while maintaining the same style of the painting

Maintaining likeness of character

It is best to specify that the likeness of the character be maintained when making changes involving a character.

Reference Image


Transform the man into a viking warrior while preserving his exact facial features, eye color, and facial expression → This maintains the likeness while changing overall context.

Image Result with prompt: Transform the man into a viking warrior while preserving his exact facial features, eye color, and facial expression

Alternatively, Change the clothes to be a viking warrior → This maintains perfect identity and other elements while only modifying the specified element (in this case, the outfit of the character).

Image result with prompt: Change the clothes to be a viking warrior

💡 Tip: The usage of transform in a prompt will typically result in complete changes to the specified element. Alternative wording can be used if nuanced changes are desired.

Compositional control


This refers to editing backgrounds or changing scenes while maintaining the composition of the image. Simple prompts will typically change the composition somewhat. By specifying that the placement/position, scale and pose of a character or element be maintained, the composition can be better preserved.

Reference Image


Put him on a beach is a very vague prompt and will result in more overall changes.

Image result with prompt: Put him on a beach


Change the background to a beach while keeping the person in the exact same position, scale, and pose. Maintain identical subject placement, camera angle, framing, and perspective. Only replace the environment around them.

Image result with the prompt: ​Change the background to a beach while keeping the person in the exact same position, scale, and pose. Maintain identical subject placement, camera angle, framing, and perspective. Only replace the environment around them.

This preserves the position and pose better.

For GPT-Image-1:

  • Describe image combinations: When using more than one Reference Image, especially to pull specific elements from each, describe how each image should be used and be explicit about which elements to include.

Pricing

Please note that with GPT-Image-1, the cost increases by 1 token per Reference Image used. Multi-image support for FLUX.1 Kontext is coming soon.

Model            | FLUX.1 Kontext | GPT-Image-1
Aspect Ratio     | Flat Cost      | Low Quality | Medium Quality | High Quality
Square (1:1)     | 50             | 20          | 60             | 215
Portrait (2:3)   | 50             | 25          | 85             | 320
Landscape (3:2)  | 50             | 25          | 85             | 320
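The pricing rules above can be sketched as a small calculation: a flat 50 tokens for FLUX.1 Kontext, and for GPT-Image-1 a base cost from the table plus 1 token per Reference Image. The values are copied from the table above; treat them as illustrative, since actual pricing may change.

```python
# Token cost sketch based on the pricing table above (illustrative values).

FLUX_FLAT_COST = 50  # FLUX.1 Kontext: flat 50 tokens per generation

# GPT-Image-1: (aspect ratio, quality) -> base token cost
GPT_IMAGE_COSTS = {
    ("1:1", "low"): 20, ("1:1", "medium"): 60, ("1:1", "high"): 215,
    ("2:3", "low"): 25, ("2:3", "medium"): 85, ("2:3", "high"): 320,
    ("3:2", "low"): 25, ("3:2", "medium"): 85, ("3:2", "high"): 320,
}

def gpt_image_cost(aspect_ratio, quality, reference_images=0):
    """Base cost for the chosen size/quality, plus 1 token per Reference Image."""
    return GPT_IMAGE_COSTS[(aspect_ratio, quality)] + reference_images

# e.g. a square, high-quality generation with 3 Reference Images: 215 + 3
print(gpt_image_cost("1:1", "high", reference_images=3))  # 218
```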


FAQ

Is GPT-Image-1 the same as FLUX.1 Kontext?

No, they are two different models. FLUX.1 Kontext was developed by Black Forest Labs, while GPT-Image-1 was created by OpenAI.

How do I create an image using GPT-Image-1 and FLUX.1 Kontext?

Both models are available as standalone image generation models, accessible via the Models & Presets menu in the Image Creation page.

What is Omni Editing and the Inline Editor?

Omni Editing refers to the new way of editing generated images, powered by the new Omni models that have a strong contextual understanding of text and images. On Leonardo.Ai, Omni Editing is facilitated by the new Inline Editor, a prompt bar that appears when viewing a generated image, enabling effortless editing with chat-based instructions and Reference Images.

Can I change the style or format of images made with GPT-Image-1 and FLUX.1 Kontext without starting over?

Yes! You can use our new Inline Editor, a prompt bar that appears when viewing a generated image, enabling effortless editing with chat-based instructions and Reference Images.

Can GPT-Image-1 and FLUX.1 Kontext edit my own photos or images?

Yes, you can upload Reference Images to guide your generations. Note: Only GPT-Image-1 supports multi-image references at present.

How does editing images with the Inline Editor work?

Simply click on a generated image to open the image viewer, where the Inline Editor will appear as a prompt box beneath it. From there, you can use simple instructional prompts or upload Reference Images to make the desired changes. Note: Only GPT-Image-1 supports multi-image references at present.

How much does it cost to generate images with GPT-Image-1 and FLUX.1 Kontext?

The cost of generating GPT-Image-1 images varies depending on the image dimensions, image quality, and how many Reference Images are used (1 token per image). FLUX.1 Kontext generations are consistently priced at 50 tokens.


The quality and results of images from GPT-Image-1 are poor

We recommend setting the Quality Mode to High when using GPT-Image-1 for the best results.
