
Getting Started

Navigate ToltIQ's home page features, understand browser requirements, learn best practices for document comparison and chat prompts, explore available LLM models and their capabilities, find tutorial videos, and understand export options for your analysis.

Written by Maya Boeye
Updated over 3 weeks ago

What is the Home page?

Think of the Home page as an easy button: it directs you to the major platform functions so you can quickly start an analysis. The Home page allows you to:

  • Utilize Quick Chat

  • Create a new deal

  • View your recent deals

  • View your company's weekly leaderboard

  • Access User Settings, represented by the Gear icon

Which browser should I use to access ToltIQ?

We support Safari but do not recommend it; performance is demonstrably better in Chrome and Edge.

What's the best way to compare information in multiple documents?

If the documents are already in a Deal, create a new Chat and enable the Restricted Chat setting to select the specific documents you want to include in the analysis.

  • If you don't have a Deal yet, then create the Deal first and upload the documents

Once the Chat is established, determine the best model for the Q&A session. Refer to the diagram below for additional information on determining the best model for your specific needs.

What is the Exports page?

The Exports page allows you to export multiple Chat outputs simultaneously.

To export one or multiple Chats, navigate to Exports in the Deal menu at the top of your screen. Here, you can select which Chats you'd like to export and the format you'd like them exported in.

Chats can be exported as:

  • Word (table): A Microsoft Word document in table format containing the date and time of the Chat, Chat Name, User Message, and AI Response

  • Word (clean): A Microsoft Word document in Q&A paragraph format

  • Word (clean, answers only)

  • Excel: An Excel spreadsheet that also includes additional LLM data

What is the Retrieval Strategy?

The Retrieval Strategy is the embedding logic used to match your question to the contents of the documents and retrieve the chunks of data that are then refined into an answer.

We currently employ a Hybrid Serverless Retrieval Strategy.
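ToltIQ does not publish the internals of its Hybrid Serverless Retrieval Strategy, but the general idea behind hybrid retrieval is to blend a lexical (keyword) match with a semantic (embedding) similarity score. The sketch below is purely illustrative, with hypothetical function names and a bag-of-words cosine standing in for real embeddings:

```python
import math
from collections import Counter

def keyword_score(query: str, chunk: str) -> float:
    """Fraction of query terms that appear in the chunk (toy lexical match)."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def vector_score(query: str, chunk: str) -> float:
    """Cosine similarity over bag-of-words counts (stand-in for embeddings)."""
    qv, cv = Counter(query.lower().split()), Counter(chunk.lower().split())
    dot = sum(qv[t] * cv[t] for t in qv)
    norm = (math.sqrt(sum(v * v for v in qv.values()))
            * math.sqrt(sum(v * v for v in cv.values())))
    return dot / norm if norm else 0.0

def hybrid_retrieve(query: str, chunks: list[str], k: int = 2,
                    alpha: float = 0.5) -> list[str]:
    """Blend the two scores and return the top-k chunks."""
    scored = [(alpha * keyword_score(query, c)
               + (1 - alpha) * vector_score(query, c), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]
```

For example, `hybrid_retrieve("base rate", chunks, k=1)` would surface the chunk mentioning "base rate" ahead of unrelated chunks; the `alpha` parameter controls how much weight the lexical match gets versus the semantic match.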

What are some helpful tips for when I'm in a Chat?

  • "ABC" is the same as "XYZ" (useful in the rare case where the AI model may not realize the first term is equivalent to the second, and you want the model to understand)

    • Ex: Base Rate is the same thing as Interest Rate

    • Ex: Top line, sales, and revenue are all the same

  • Be brief

  • Be concise

  • Be detailed

  • Be specific

  • Be verbose

  • Don't elaborate

  • Enumerate

  • Extract the following information

  • Ignore xxx (where xxx is something you've seen in a prior response, or is content you know exists in a document but isn't relevant)

  • Take it step by step

  • Take your time

  • Think carefully

  • Use bullet points

  • Use a numbered list

  • Use a table format

Can I stop an answer mid-stream?

Yes. Select Stop (located in the same place as the Send button).

Where can I find tutorial videos?

You can find some of our tutorial videos here:

You can find informational documents here:

What is a Large Language Model (LLM)?

A Large Language Model (LLM) is an advanced type of AI that can understand and produce text similarly to humans by analyzing vast amounts of written language from books, websites, and other sources. It can answer questions, generate stories, help with summaries, and assist with complex analytical tasks by learning patterns and relationships between words. While it doesn't "understand" language the way humans do, it can generate meaningful responses based on its training data.

What models are available?

Anthropic

  • 4.5 Sonnet, 4.5 Opus, 4.6 Opus (Medium), 4.6 Sonnet

OpenAI

  • GPT5.1 (Nov 2025), GPT5.4 (Mar 2026)

Google

  • Gemini 3.0 Pro Preview (Preview, Nov 2025)

What are the differences between the various models?

Each model has been trained on potentially different content, has different rules set by its provider, and has a different token capacity for content context (token count is roughly comparable to word count, though not identical).

  • The number following the model name (8K, 16K, 32K, 128K, 200K) indicates the maximum number of tokens the model can process in a single prompt/response cycle (aka the Context Window).

  • Longer token capacities are useful for more extensive interactions, while shorter ones are sufficient for most general use cases.

  • Use Case Suitability: The choice between these models depends on the specific requirements of a task, including the complexity of language processing needed and the length of the text to be processed.
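A quick back-of-envelope check can tell you whether a prompt will fit a given context window. The sketch below assumes the common rule of thumb of roughly 4 characters per token for English text (actual tokenizers vary by model), and the function names are illustrative:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int,
                 reserve_for_answer: int = 1000) -> bool:
    """Check whether a prompt leaves room in the window for the response.

    The prompt and the response share the same context window, so some
    tokens must be held back for the model's answer.
    """
    return estimate_tokens(text) + reserve_for_answer <= context_window
```

For instance, a 600,000-character document estimates to about 150,000 tokens, so it would fit a 200K window but not a 128K one; reserving tokens for the answer matters because the prompt and response share the same window.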

