
Model Management

Create a New Model Set

Administrators can create a new model set by following these steps:

  1. Navigate to the Model Set Management Page: Go to Model Management, then click "Model Set".
  2. Click "New": On the right side of the page, click the "New" button to start creating a new model set.
  3. Select Model Type: In the pop-up window, select the type of model. Available types include:
    • LLM (Large Language Model)
    • Embedding (Embedding Model)

💡 Tip: The system supports adding only one Embedding model.

  4. Select Language Model: Choose the appropriate language model according to your needs. Currently supported models include:
    • LLM models: OpenAI, Deepseek, Azure, Ollama, Tongyi, Qianfan;
    • Embedding models: OpenAI Embeddings, Azure Embeddings, AliEmbeddings, Ollama Embeddings.
  5. Fill in Model Set Information: Enter the name and description of the model set, ensuring the information is clear and accurate (name within 50 characters, description within 200 characters).
  6. Select Additional Settings: Choose whether to support image Q&A and whether it is an inference model, as needed.
  7. Confirm Creation: After completing the information, click the "Confirm" button to finish creating the model set.

By following these steps, administrators can successfully create a new model set and configure the corresponding settings.

💡 Tip: When using an Embedding model, please note the vector dimension limits:

  • The PgSQL vector field supports up to 2000 dimensions;
  • When purchasing an Embedding model, ensure its output vector dimension is less than 2000 (some models allow this to be adjusted in their configuration);
  • When adding a vector model in the system, enter a vector dimension value less than 2000; otherwise, storage exceptions or indexing failures may occur;
  • It is recommended to set the Token context limit to 8192.
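The dimension limits above can be expressed as a simple pre-write check. This is a minimal, hypothetical sketch: the `validate_embedding` function and its constants are illustrative, not part of the product's API.

```python
# Hypothetical sketch: validate an Embedding model's output vector before
# writing it into a PgSQL vector field, which in this deployment supports
# at most 2000 dimensions.

MAX_VECTOR_DIM = 2000           # PgSQL vector field limit noted above
RECOMMENDED_TOKEN_LIMIT = 8192  # suggested Token context limit

def validate_embedding(vector, token_limit=RECOMMENDED_TOKEN_LIMIT):
    """Return True if the vector can be stored safely; raise otherwise."""
    if len(vector) >= MAX_VECTOR_DIM:
        raise ValueError(
            f"vector has {len(vector)} dimensions; must be < {MAX_VECTOR_DIM}"
        )
    if token_limit > RECOMMENDED_TOKEN_LIMIT:
        raise ValueError("token context limit exceeds the recommended 8192")
    return True

# A 1536-dimension vector (common for many embedding models) passes the check.
print(validate_embedding([0.0] * 1536))  # True
```

Running a check like this before registering a vector model helps surface dimension mismatches early, instead of at indexing time.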

Default Model Settings

In model management, administrators can set default models to specify suitable models for different usage scenarios. For example, in BI (Business Intelligence), Translate, and other scenarios, the default model can be set to the Azure-4o model. In this way, the system will automatically use the preset default model in the corresponding scenario, improving work efficiency and consistency.

The setup steps are similar to creating a model set. Administrators can select the appropriate model as the default model for the scenario according to actual needs.

Usage Scenarios

| Feature Name | Description | Typical Scenario Example |
| --- | --- | --- |
| RAG | Retrieval-augmented generation with a knowledge base to improve the accuracy and reliability of LLM answers | Enterprise knowledge base Q&A, intelligent customer service |
| i18n translation | Multi-language translation and UI internationalization for global deployment | AI products for overseas users, international operation platforms |
| gallery ssn writing | Records each user interaction or content generation process for easy review and secondary editing | Conversation history archiving, content creation retention, version tracing |
| gallery rednote | Users can highlight key information or write notes to assist review and content auditing | Reviewing AI-generated content, collaborative creation, highlighting key segments |
| gallery mindmap | Converts text content into structured mind maps to enhance information understanding | Project sorting, knowledge graph generation |
| optimize prompt | Optimizes user input prompts to improve model understanding and output quality | Assists in rewriting unclear user input, low-barrier question optimization |
| recommend question | Automatically recommends the next possible interesting or related question to enhance the interaction experience | Chatbot conversation continuation, recommendation guidance |
| gallery chat lead | Provides conversation guide templates or "starter words" to help users initiate clearer questions or creation requests | Chat template library, creation prompts |
| recommend config | Automatically recommends LLM parameter configurations (such as temperature, whether to use RAG) based on tasks | Agent configuration panel, low-code/no-code intelligent recommendation |
| pdf_markdown | Parses PDF files into Markdown structured format for easy reading and further processing | Document import to knowledge base, summary generation |
| translate | Automatically translates user input or model output for cross-language communication | Multilingual conversation, multilingual customer service |
| BI | Uses LLM to process structured data and generate visual analysis or business insights | Natural language analysis reports, chart generation, BI Q&A assistant |
| llm_ocr | Extracts text from images into structured text and combines with LLM for semantic understanding | Image Q&A, form recognition, PDF screenshot interpretation, image document search, etc. |

Create Model Group

Administrators can create model groups in model management. A created model group can then be assigned to an assistant when the assistant is created.

The steps to create a model group are as follows:

  1. Navigate to the Model Group Management Page: Go to Management, select "Model Management", then click "Model Group".
  2. Click "New Model Group": On the right side of the page, click the "New Model Group" button to start creating a new model group.
  3. Enter Model Group Name: Assign a unique name to the model group for easy identification (within 50 characters).
  4. Select Models: Select the models to be included in the group from the available model list; multiple selections are supported.
  5. Choose Whether to Enable Adaptive Model Deployment: Enable adaptive model deployment as needed to improve model flexibility and adaptability.
  6. Choose Whether to Enable Deep Thinking Model: Enable deep thinking model as needed to enhance the model's intelligent processing capabilities.
  7. Click "Save": After confirming all settings are correct, click the "Save" button to successfully create the model group.

Model Group Channel Details

After creating a model group, you can enter the "Channel Details" page to view all configured channels. You can:

  • Create a new channel;
  • Or click "Key Details" on the right to view all key information under the channel and create new keys.

This page makes it easy to centrally manage all channels and their corresponding API Keys.

The API Keys provided by our platform are fully independent and self-contained credentials. Each API Key acts as a standalone "pass" with full access and invocation permissions, as detailed below:

  • Independent of the Platform User System
    The caller holding the API Key does not need to be a registered user of the platform or have any specific user permissions. As long as a valid API Key is included in the request, it will be regarded as an authorized request, and the system will process and respond accordingly.

  • Not Restricted by License
    Calls initiated via API Key do not occupy the platform's user License quota. Therefore, even if the actual number of registered users on the platform is limited, more service integrations and business scenarios can be flexibly supported via API Key, enabling large-scale usage without extra authorization or expansion.

  • Flexible Configuration as Needed
    Different API Keys can be configured with different validity periods and permission scopes (such as access modules, data ranges, etc.) as needed, to suit different integrated systems or business parties. It is recommended to generate a separate API Key for each integration party for easier management and usage tracking.

⚠️ Security Tip: Please manage your API Keys properly to avoid leakage. If abused externally, all requests initiated with the Key will by default have full permissions, which may pose risks to data and system security.
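Because the API Key is the sole credential (no platform user login is involved), a caller simply attaches it to each request. This is a hypothetical sketch: the endpoint URL and the `Authorization: Bearer` header scheme are assumptions, so check your deployment's API reference for the actual values.

```python
# Hypothetical sketch of building an API-Key-authenticated request.
# The URL and header scheme below are placeholders, not the product's
# documented API.
import urllib.request

def build_request(api_key: str, payload: bytes) -> urllib.request.Request:
    """Build an authenticated request; no platform user account is required."""
    return urllib.request.Request(
        "https://example.com/api/v1/chat",  # placeholder endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # the API Key is the sole credential
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-demo-key", b'{"prompt": "hello"}')
print(req.get_header("Authorization"))  # Bearer sk-demo-key
```

Following the recommendation above, each integration party would call `build_request` with its own dedicated key, keeping usage traceable per integration.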

Model Configuration Instructions

This product supports integration with the following enhanced features, all of which rely on services provided by Azure or external platforms. You need to select and obtain the necessary access credentials (API Key, Endpoint, etc.) from the relevant platforms according to your needs:

  1. Voice Input Feature (Whisper Service)
    • Service Description: Purchase and deploy the Whisper service on Azure to enable speech-to-text functionality.
    • Configuration Method: Supports configuring multiple Whisper Keys via environment variables; the variable name must clearly contain "Whisper". If not configured, the voice input button will not be displayed.
    • Compatibility Note: Also supports configuring the Whisper service via initialization SQL, which will automatically write the configuration into environment variables and supports later modification.

  2. Azure OCR Mode (Azure Document Intelligence)
    • Service Description: Provides OCR capabilities based on Azure Document Intelligence, supporting both "Basic" and "Advanced" recognition modes.
    • Configuration Method: You need to set the KEY and Endpoint for Azure OCR mode in the environment variables. If not configured, this OCR mode cannot be selected.
    • Interaction Prompt: The interface will automatically display available modes based on the configuration status and restrict selection of invalid options.
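The two configurations above can be sketched as environment-variable lookups. Only the rule that Whisper key variable names must contain "Whisper" comes from this documentation; the Azure OCR variable names (`AZURE_OCR_KEY`, `AZURE_OCR_ENDPOINT`) are illustrative placeholders, not the product's actual names.

```python
# Hypothetical sketch of reading the Whisper and Azure OCR configurations
# from environment variables. Variable names for Azure OCR are assumptions.
import os

def collect_whisper_keys(env=None):
    """Gather every Whisper key: any variable whose name contains 'Whisper'."""
    env = os.environ if env is None else env
    return {name: value for name, value in env.items() if "Whisper" in name}

def azure_ocr_available(env=None):
    """Azure OCR mode is selectable only when both KEY and Endpoint are set."""
    env = os.environ if env is None else env
    return bool(env.get("AZURE_OCR_KEY")) and bool(env.get("AZURE_OCR_ENDPOINT"))

demo = {"Whisper_Key_1": "k1", "MyWhisperKey": "k2", "AZURE_OCR_KEY": "ak"}
print(collect_whisper_keys(demo))  # both Whisper keys are picked up
print(azure_ocr_available(demo))   # False: the OCR endpoint is missing
```

This mirrors the documented behavior: with no Whisper keys found, the voice input button is hidden, and with an incomplete Azure OCR configuration, that OCR mode cannot be selected.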

💡 Tip:
  • To purchase services or obtain API Keys and Endpoints, please visit the official Microsoft Azure website or the relevant service provider's page and select the appropriate pricing plan as needed.
  • We recommend that customers evaluate data security, response time, and pricing factors before configuration. For deployment support, please contact the technical support team.