Environment Inspection and Confirmation


Feature Description

After deployment on the SERVICEME platform is completed, the administrator needs to perform environment inspection and confirmation to verify that model configurations, system dependencies, and authorizations are all in a valid state.
This step is key to ensuring the stable operation of system functions such as document recognition, speech recognition, translation, and RAG.

Inspection Scope

Environment inspection mainly includes the following modules:

| Inspection Item | Description | Required |
| --- | --- | --- |
| Model Set | Check whether it includes models within the standard supported scope (GPT, Embedding, OCR, STT, etc.). | Yes |
| Model Group | Check whether each model group is configured correctly and available. | Yes |
| Default Model Setting | Check whether the binding relationship between default models and scenarios is correct. | Yes |
| System / ENV Environment Variables | Check key system variables and model connection status (such as the availability of the OCR, Whisper, and Embedding models). | Yes |
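The environment-variable portion of this check can be scripted. The sketch below is a minimal example; the variable names (`OCR_API_KEY`, `WHISPER_ENDPOINT`, `EMBEDDING_ENDPOINT`) are hypothetical placeholders, not SERVICEME's actual configuration keys — substitute the names used by your deployment.

```python
import os

def check_env_vars(required):
    """Return the required environment variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]

# Hypothetical variable names -- replace with your deployment's actual keys.
REQUIRED_VARS = ["OCR_API_KEY", "WHISPER_ENDPOINT", "EMBEDDING_ENDPOINT"]

missing = check_env_vars(REQUIRED_VARS)
if missing:
    print("Missing or empty:", ", ".join(missing))
else:
    print("All required environment variables are set.")
```

A variable that is set but empty is reported as missing, since an empty key or endpoint fails in the same way as an absent one.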

Inspection Steps

Open Model Management

Go to Management > Model Management and check the following items one by one:

Model Set

  • Confirm whether the following standard models exist:
    • LLM
    • Embedding
  • If any are missing, please contact the system administrator to re-import the model set.
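The model-set check above can be expressed as a simple set difference. This is a minimal sketch using the two model types named in this section (LLM and Embedding); extend `REQUIRED_TYPES` to match your deployment's standard scope.

```python
# Required model types per the inspection checklist above.
REQUIRED_TYPES = {"LLM", "Embedding"}

def missing_model_types(configured):
    """Return the required model types absent from the configured model set."""
    return REQUIRED_TYPES - set(configured)

# Example: a deployment whose model set lacks the Embedding model.
print(missing_model_types(["LLM", "OCR"]))  # {'Embedding'}
```

If the result is non-empty, contact the system administrator to re-import the model set, as described above.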

Model Group

  • Check whether model groups have been configured according to business scenarios, for example:
    • Chat / RAG / Translation / PDF Parsing / OCR, etc.
  • Confirm that the models referenced in each model group are consistent with the actual supported scope.
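The consistency check between model groups and the supported scope can be sketched as below. The group names follow the examples above; the model names in `SUPPORTED_MODELS` (and the stale `legacy-ocr-model` reference) are assumptions for illustration only.

```python
# Assumed supported-model list -- replace with your deployment's actual scope.
SUPPORTED_MODELS = {"gpt-4.1", "gpt-4.1-mini", "text-embedding-3-small"}

def invalid_references(groups):
    """Map each group name to any referenced models outside the supported set."""
    problems = {}
    for group, models in groups.items():
        unknown = [m for m in models if m not in SUPPORTED_MODELS]
        if unknown:
            problems[group] = unknown
    return problems

groups = {
    "Chat": ["gpt-4.1"],
    "RAG": ["gpt-4.1", "text-embedding-3-small"],
    "OCR": ["legacy-ocr-model"],  # hypothetical stale reference
}
print(invalid_references(groups))  # {'OCR': ['legacy-ocr-model']}
```

An empty result means every group references only models within the supported scope.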

Default Model Setting

  • Go to the "Default Model Setting" page and confirm the default bound models item by item (as shown in the example below):
    • translate → gpt-4.1-mini
    • gallery rednote → gpt-4.1-mini
    • recommend config → gpt-4.1-mini
    • gallery chat lead → gpt-4.1
    • optimize prompt → gpt-4.1
    • rag → gpt-4.1
    • i18n translation → gpt-4.1-mini
    • gallery mindmap → gpt-4.1-mini
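The bindings listed above can be checked programmatically. This sketch encodes the example scenario-to-model table and flags any scenario whose default model is unset; the dictionary keys and values come directly from the list above.

```python
# Scenario-to-model bindings from the Default Model Setting example above.
DEFAULT_BINDINGS = {
    "translate": "gpt-4.1-mini",
    "gallery rednote": "gpt-4.1-mini",
    "recommend config": "gpt-4.1-mini",
    "gallery chat lead": "gpt-4.1",
    "optimize prompt": "gpt-4.1",
    "rag": "gpt-4.1",
    "i18n translation": "gpt-4.1-mini",
    "gallery mindmap": "gpt-4.1-mini",
}

def unbound_scenarios(bindings):
    """Return scenarios whose default model is missing or empty."""
    return [scenario for scenario, model in bindings.items() if not model]

print(unbound_scenarios(DEFAULT_BINDINGS))  # [] when every scenario is bound
```

A non-empty result corresponds to the "Default model setting is empty" issue covered in the troubleshooting section below.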

Note: For tasks with high computational load or higher reasoning requirements (such as knowledge retrieval, complex problem analysis, and Prompt optimization), prioritize models with stronger performance.

Note: For lightweight scenarios (such as text translation, summary generation, and routine copywriting), select models with faster response times and lower cost to balance performance and efficiency.
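The two notes above amount to a simple routing policy. The sketch below is one possible encoding under the assumption that gpt-4.1 handles the heavy-reasoning scenarios and gpt-4.1-mini handles everything else; the task names mirror the example bindings, and the split is illustrative, not a fixed SERVICEME rule.

```python
# Assumed heavy-reasoning scenarios, following the note above.
HEAVY_TASKS = {"rag", "optimize prompt", "gallery chat lead"}

def pick_model(task):
    """Illustrative policy: stronger model for heavy tasks, faster/cheaper otherwise."""
    return "gpt-4.1" if task in HEAVY_TASKS else "gpt-4.1-mini"

print(pick_model("rag"))        # gpt-4.1
print(pick_model("translate"))  # gpt-4.1-mini
```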

Common Issues and Handling

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| OCR call failed | API Key is invalid or not configured correctly | Update the key in the environment variables |
| Whisper not responding | Model is not enabled or the server side is not deployed | Check the model group configuration and deployment status |
| Default model setting is empty | License is incomplete or import failed | Confirm the License file and authorization scope |
| Call latency is too high | Network access to external APIs is unstable | Use a model service in the same region as the deployment |