LLM Settings
Configure various aspects of the AI model that powers the assistant.
Tailoring the AI model improves its ability to interpret questions and generate accurate SQL queries or analyses that align with your business.
Adjust AI model parameters, refine instruction prompts, and update guidelines for query generation based on evolving needs.
The LLM Settings allow users to configure various aspects of the AI model. These settings include:
Model Provider – The AI provider being used (e.g., OpenAI).
Model Version – The specific model assigned to the assistant (e.g., GPT-4o-mini).
Max Output Tokens – Limits the number of tokens in the assistant’s responses to control verbosity.
Temperature – Adjusts randomness in responses (lower values make outputs more deterministic).
Top P – Restricts token selection to the smallest set of candidates whose cumulative probability exceeds P (nucleus sampling), an alternative to Temperature for controlling randomness.
Frequency Penalty & Presence Penalty – Discourage repetition: the frequency penalty lowers the likelihood of reusing tokens that already appear often, while the presence penalty encourages introducing new topics.
Chat History Length – Determines how many past messages are considered in a conversation.
Prompt – Allows users to define the assistant’s system behavior and response style.
These settings enable precise control over how the assistant generates insights and interacts with users.
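To make the effect of these settings concrete, the sketch below shows how they might map onto a single chat-completion request using the OpenAI Python SDK. The setting values, system prompt, and conversation are illustrative assumptions, not the product's internal implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative values only; in practice these are managed through the LLM Settings UI.
llm_settings = {
    "model": "gpt-4o-mini",        # Model Version
    "max_tokens": 512,             # Max Output Tokens
    "temperature": 0.2,            # lower values = more deterministic output
    "top_p": 0.9,                  # nucleus-sampling cutoff
    "frequency_penalty": 0.0,      # penalize frequently repeated tokens
    "presence_penalty": 0.0,       # encourage introducing new topics
}

# Prompt: defines the assistant's system behavior and response style.
system_prompt = "You are a data assistant that writes accurate SQL for business questions."

# Chat History Length: how many past messages to include with each request.
chat_history_length = 10

# Hypothetical prior conversation; only the most recent messages are kept.
conversation = [
    {"role": "user", "content": "Which region had the highest revenue last quarter?"},
    {"role": "assistant", "content": "SELECT region, SUM(revenue) ... GROUP BY region;"},
    {"role": "user", "content": "Now break that down by month."},
]
recent_history = conversation[-chat_history_length:]

response = client.chat.completions.create(
    messages=[{"role": "system", "content": system_prompt}, *recent_history],
    **llm_settings,
)
print(response.choices[0].message.content)
```

Lowering Temperature (or Top P) and capping Max Output Tokens tends to produce shorter, more repeatable SQL, while a longer Chat History Length helps the assistant resolve follow-up questions that refer back to earlier answers.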