How to use the Singular AI Chat screen
Step-by-Step Guide: Singular AI Chat
In this article, we provide a comprehensive guide to using the "Singular AI Chat" screen. This tool leverages generative artificial intelligence to create a chat interface that uses our Knowledge Nodes as its context. This enables the chat to perform semantic searches, generative semantic queries, and generative queries. As a result, the chat can answer questions, search for information inside the Nodes, and generate SQL code for querying the provided Nodes.
Step 1: Configure the "Initial Prompt Engineering" settings
This settings tab allows you to customize the initial information shown in the chat.
Settings fields:
- Welcome message for the chat: Customize the initial message shown by the Singular AI chat.
- Amount of examples per row: Customize how many prompt examples are shown per row. This field accepts values from 1 to 3 per line.
- Prompt examples for ease of use: Add prompts that show users examples of what they can ask in the chat.
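The settings above can be pictured as a simple configuration object. This is an illustrative sketch only; the field names (`welcome_message`, `examples_per_row`, `prompt_examples`) are assumptions for the example, not Singular's actual schema.

```python
# Hypothetical representation of the "Initial Prompt Engineering" settings.
# Field names are illustrative, not the product's real configuration schema.
initial_prompt_settings = {
    "welcome_message": "Hi! Ask me anything about your data.",
    "examples_per_row": 3,  # documented range: 1 to 3
    "prompt_examples": [
        "What were last month's total sales?",
        "Show revenue by region as a chart",
        "Which product had the highest growth?",
    ],
}

def validate(settings: dict) -> bool:
    """Check the documented constraint: 1 to 3 examples per row."""
    return 1 <= settings["examples_per_row"] <= 3

print(validate(initial_prompt_settings))  # True
```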
Step 2: Configure "AI Workflow" settings
These settings allow you to customize the workflow that the prompt and information will follow. This configuration significantly impacts the output shown to the user and operates using blocks.
There are three types of blocks:
Semantic Search
This block processes the meaning of words and phrases. It has the following fields:
- Input Variable: Insert the name of the variable or prompt representing the user prompt. This works as the input information for the block.
- Knowledge Nodes: Select the node or nodes that the semantic search will use to retrieve information.
- Embedding Model: Choose the type of embedding model, which affects the performance of the artificial intelligence.
- Min. prediction score: Set the minimum prediction score required for the AI to consider a result relevant.
- Output Variable: Insert the name of the variable that will hold the output results.
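To see how these fields interact, here is a minimal sketch of a semantic search: the user prompt is embedded, compared against node embeddings, and only results at or above the minimum prediction score are kept. The toy three-dimensional vectors and the `semantic_search` helper are illustrative stand-ins for a real embedding model and for Singular's internal retrieval.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, documents, min_score):
    """Return documents whose similarity to the query meets min_score."""
    results = []
    for text, vec in documents:
        score = cosine_similarity(query_vec, vec)
        if score >= min_score:  # the "Min. prediction score" threshold
            results.append((text, score))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Toy 3-dimensional "embeddings" standing in for a real embedding model.
docs = [
    ("monthly sales report", [0.9, 0.1, 0.0]),
    ("employee handbook",    [0.0, 0.2, 0.9]),
]
hits = semantic_search([1.0, 0.0, 0.0], docs, min_score=0.5)
print([text for text, _ in hits])  # ['monthly sales report']
```

Raising the minimum prediction score makes retrieval stricter: fewer, but more relevant, results reach the later blocks.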
Generative Query
- Prompt Engineering: Allows you to customize the user prompt or instructions using output variables or prompts.
- Include user last prompt: Use the user's last prompt as input for conversation context.
- Include prompt history: Use the prompt history as input for the system prompt for conversation context.
- Knowledge Nodes: Customize which nodes are going to be used to create the context for the response output.
- Limit rows when no Aggregators found: Limit the number of rows used when no aggregator function is set on a column of the node.
- Model: Choose the model that will be used.
- Max Tokens: Determine the maximum length of the text generated as output. Increasing the max tokens increases the amount of text.
- Temperature: Customize the temperature parameter of the model to control the balance between predictability and creativity in the generated output. A low temperature produces more predictable output, while a high temperature produces more random output. The value ranges from 0 to 2.
- Data Set Variable: Customize the name of the variable where the dataset will be saved.
- Embedded Chart: Customize the name of the variable where the embedded chart will be saved.
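The way these settings could flow together is sketched below: the engineered prompt, optional history, and the last user prompt are assembled and sent to the model, and the result is stored under the configured dataset variable. Everything here is hypothetical; `call_model` is a stand-in for the configured LLM, not a real Singular API, and the setting keys are invented for the example.

```python
# Illustrative sketch of a Generative Query block. `call_model` is a
# hypothetical stand-in for the configured LLM; setting keys are invented.
def call_model(prompt, model, max_tokens, temperature):
    # A real implementation would invoke the chosen model here.
    return {"sql": "SELECT region, SUM(sales) FROM sales_node GROUP BY region"}

def run_generative_query(user_prompt, history, settings, variables):
    parts = [settings["prompt_engineering"]]
    if settings["include_prompt_history"]:
        parts.extend(history)              # prior turns as context
    if settings["include_user_last_prompt"]:
        parts.append(user_prompt)          # the latest question
    response = call_model(
        "\n".join(parts),
        model=settings["model"],
        max_tokens=settings["max_tokens"],
        temperature=settings["temperature"],  # 0 = predictable, 2 = most random
    )
    # The generated dataset is stored under the configured variable name,
    # so later blocks can reference it by that name.
    variables[settings["data_set_variable"]] = response["sql"]
    return variables

settings = {
    "prompt_engineering": "You are a data assistant.",
    "include_prompt_history": True,
    "include_user_last_prompt": True,
    "model": "example-model",
    "max_tokens": 256,
    "temperature": 0.2,
    "data_set_variable": "sales_data",
}
result = run_generative_query("Show sales by region", [], settings, {})
print("sales_data" in result)  # True
```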
Add LLM
- Prompt Engineering: Allows you to customize the user prompt or instructions using output variables or prompts.
- Include prompt history: Use the prompt history as input for conversation context.
- Model: Choose the model that will be used.
- Max Tokens: Determine the maximum length of the text generated as output. Increasing the max tokens increases the amount of text.
- Temperature: Customize the temperature parameter of the model to control the balance between predictability and creativity in the generated output. A low temperature produces more predictable output, while a high temperature produces more random output. The value ranges from 0 to 2.
- Output Variable: Customize the name of the variable that will save the output generated by the LLM.