Advanced Bot Builder
Written by Oriol Zertuche

Advanced Mode Settings

The Advanced Bot Builder gives you the freedom to customize multiple parameters and lets you build the perfect bot that suits your use case. Currently, you can customize the following parameters:

  1. Prompt

  2. Relevance Score

  3. Token Distribution

  4. Persist Prompt

Prompt

The prompt will define the personality of your bot. To simplify the prompting process, consider the bot as an employee at your business. Although there is no specific structure for writing a personality prompt, we have prepared a list of parameters for your reference.

Parameter: Example

AI’s Nickname: “You are Cody.”

Job Profile: “You work as an IT Support Engineer at my company.”

Tone: “Maintain a professional and friendly demeanour throughout all interactions, ensuring that users feel comfortable and supported.”

Domain: “Technology, Support, Technology Consultant”

Expertise Level: “Remember to convey a sense of expertise and confidence in your responses.”

Ethical Guidelines: “Do not make up your own responses or use data which does not exist in the training data. Refrain from mentioning ‘unstructured knowledge base’ or file names during the conversation.”

Additional Services (optional): “Try to promote sales of XYZ wherever possible.”

Default Case (for unanswerable queries): “In instances where a definitive answer is unavailable, acknowledge your inability to answer and inform the user that you cannot respond.”

System prompts can be used to set general instructions for the bot, such as tone, format, language, personality traits, and roles. They have a strong effect on the responses and must be set carefully; otherwise, they might limit the scope of response generation for the bot.

A sample prompt:

“You are the waiter at a pizza joint. Maintain a professional and friendly demeanor throughout all interactions, ensuring that users feel comfortable and supported. Remember to convey a sense of expertise and confidence in your responses. Additionally, I encourage you to actively promote our premium pizzas whenever appropriate. Do not refer to any menu sources other than the ones provided in the knowledge base. While recommending pizzas, state their prices and any offers that are applicable for them, too.”
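If you prefer to draft prompts outside the builder, the parameters from the table above can be combined into a single string before pasting them in. The Python sketch below is purely illustrative: the build_prompt helper and its parameter names are hypothetical, not part of Cody.

```python
# Illustrative only: compose a personality prompt from the parameters listed above.
# build_prompt and its parameter names are hypothetical, not a Cody API.
def build_prompt(nickname, job_profile, tone, ethics, default_case, extras=None):
    parts = [f"You are {nickname}.", job_profile, tone, ethics, default_case]
    if extras:
        parts.extend(extras)
    return " ".join(parts)

prompt = build_prompt(
    nickname="Cody",
    job_profile="You work as an IT Support Engineer at my company.",
    tone="Maintain a professional and friendly demeanor throughout all interactions.",
    ethics="Do not make up your own responses or use data which does not exist in the training data.",
    default_case="If a definitive answer is unavailable, acknowledge that you cannot respond.",
    extras=["Try to promote sales of XYZ wherever possible."],
)
print(prompt)
```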

Relevance Score

The Relevance Score reflects the degree of similarity between the user’s query and Cody’s response. Using semantic search, Cody compares the user’s query with the data present in the knowledge base. A higher relevance score produces more precise answers but can miss the broader context of the query, and vice versa. In simple terms, the relevance score sets how cautious the AI is about making mistakes or taking risks while responding.
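Conceptually, the relevance score behaves like a similarity threshold in semantic search. The sketch below is a simplified illustration with toy embedding vectors, not Cody’s actual retrieval code; the retrieve helper and its threshold are assumptions for demonstration only.

```python
import math

# Simplified illustration of a relevance threshold in semantic search.
# The embeddings here are toy vectors; this is not Cody's retrieval pipeline.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, chunks, relevance_score=0.75):
    # A higher relevance_score keeps only very close matches (precise but narrow);
    # a lower score admits more loosely related context (broader but riskier).
    return [
        text for text, vec in chunks
        if cosine_similarity(query_vec, vec) >= relevance_score
    ]

chunks = [("Pricing for premium pizzas", [0.9, 0.1]), ("Office holiday policy", [0.1, 0.9])]
print(retrieve([0.85, 0.2], chunks, relevance_score=0.75))
```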

Token Distribution

Tokens are the computational currency for large language models like the GPT family. The query (input statement) asked by the user is broken down into chunks of characters known as ‘tokens’. Because AI models are resource-intensive, they limit how much data can be processed and generated in order to address computational and memory constraints. This limit is the ‘context window’.
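To see how a query is split into tokens, you can experiment with the open-source tiktoken tokenizer, which implements the encodings used by the GPT family. This is only a general illustration of tokenization, not part of how you interact with Cody.

```python
import tiktoken  # pip install tiktoken

# Split a sample query into tokens using the cl100k_base encoding (GPT-3.5/GPT-4 family).
enc = tiktoken.get_encoding("cl100k_base")
query = "Which premium pizzas are on offer today?"
tokens = enc.encode(query)
print(len(tokens), tokens)  # token count and token IDs
```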

Context Window

Cody uses the GPT family of models, and the number of tokens available is limited. The Token Distribution feature helps you micro-manage how tokens are used for different purposes.

Tokens are mainly divided into Context, History, and Response Generation.

  1. Context: The tokens required for understanding the user query and knowledge base context.

  2. History: The tokens required for adding context to the user query using the chat history.

  3. Response Generation: The tokens required for assessing the coherence, grammar, and semantic validity of the generated text.

For the highest accuracy, it is important that Context makes up a large portion of the token distribution.
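As a rough sketch of what a token distribution means in practice, a fixed context window can be divided into the three buckets described above. The window size and percentages below are example values only, not Cody defaults.

```python
# Illustrative split of a context window into the three buckets described above.
# The 4096-token window and the percentages are example values, not Cody defaults.
CONTEXT_WINDOW = 4096

def split_tokens(window, context_pct, history_pct, response_pct):
    assert context_pct + history_pct + response_pct == 100
    return {
        "context": window * context_pct // 100,    # knowledge base context
        "history": window * history_pct // 100,    # prior chat turns
        "response": window * response_pct // 100,  # generated answer
    }

print(split_tokens(CONTEXT_WINDOW, context_pct=60, history_pct=20, response_pct=20))
# {'context': 2457, 'history': 819, 'response': 819}
```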

Persist Prompt

By continuously reinforcing the prompt (personality of the bot), you create a form of conversational context and constraint that keeps the AI on track and helps maintain compliance with the desired outcomes. It acts as a reminder for the AI to stay within the predefined boundaries and provide responses that are relevant, accurate, and aligned with your goals.
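In chat-API terms, persisting the prompt is comparable to re-sending the system message with every request instead of only at the start of the conversation. The sketch below is a generic pattern for illustration, not Cody’s internal implementation.

```python
# Generic illustration of "persisting" a prompt: the system message is re-attached
# to every request instead of only the first one. Not Cody's internal code.
SYSTEM_PROMPT = "You are Cody, an IT Support Engineer. Answer only from the knowledge base."

history = []  # running list of {"role": ..., "content": ...} turns

def build_messages(user_message, persist_prompt=True):
    messages = []
    if persist_prompt or not history:
        # With persist on, the prompt is re-emphasized on every turn.
        messages.append({"role": "system", "content": SYSTEM_PROMPT})
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages

print(build_messages("How do I reset my VPN password?"))
```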

Example Templates


Take a look at the following templates to understand how the Bot Builder can be used to build these use cases.

Creative AI

Unlock the potential of innovative ideas with a personalized Creative Assistant designed to support your business in creative tasks.

You are Cody, a Creative Assistant dedicated to providing users with creative ideas, solutions, and content, using the content from the knowledge base as context.

When formulating your responses, ensure they are supported by the knowledge base. Refrain from mentioning ‘unstructured knowledge base’ or file names during the conversation.

Settings

Field: Value

Context (percentage of tokens used to provide knowledge base context): 30%

Chat History (percentage of tokens used to provide chat history): 40%

Response (percentage of tokens allocated for the AI’s generated response): 30%

Persist Prompt (maintain AI compliance by continuously re-emphasizing the prompt): Off

Relevance Score (trade-off between fewer knowledge base contexts for higher accuracy and response completeness): Low

Factual AI

Experience the power of instant knowledge at your fingertips with a personalized Factual Assistant.

You are Cody, a Factual Research Assistant dedicated to providing accurate information. Your primary task is to assist me by providing reliable and clear responses to my questions, based on the information available in the knowledge base as your only source.

Refrain from mentioning ‘unstructured knowledge base’ or file names during the conversation.

You are reluctant to make any claims unless they are stated in or supported by the knowledge base. In instances where a definitive answer is unavailable, acknowledge your inability to answer and inform me that you cannot respond. Your response must be in the same language as my message.

Settings

Field: Value

Knowledge (percentage of tokens used to provide knowledge base context): 60%

Chat History (percentage of tokens used to provide chat history): 20%

Response (percentage of tokens allocated for the AI’s generated response): 20%

Persist Prompt (maintain AI compliance by continuously re-emphasizing the prompt): On

Relevance Score (trade-off between fewer knowledge base contexts for higher accuracy and response completeness): Low
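If you want to keep a record of these templates outside the UI, their settings can be captured as plain configuration data. The structure below is simply a convenient representation of the two tables above, not a format that Cody defines or imports.

```python
# Hypothetical representation of the two template configurations shown above;
# Cody does not define or import this format.
templates = {
    "creative_ai": {
        "token_distribution": {"context": 30, "history": 40, "response": 30},
        "persist_prompt": False,
        "relevance_score": "low",
    },
    "factual_ai": {
        "token_distribution": {"context": 60, "history": 20, "response": 20},
        "persist_prompt": True,
        "relevance_score": "low",
    },
}

print(templates["factual_ai"])
```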

View additional templates: https://meetcody.ai/use-cases
