
How to Manage LLMs?

Create LLM

  1. Log in to the SERVICEME platform;
  2. Switch to the "Management" module;
  3. Go to "Agent Management > Model Management" to open the model card list;
  4. Click Add to open the Add Model page.

Add Model

  • Name: The display name of the model. Example: GPT-5
  • Description: A brief description of this model. Example: GPT-5 is an artificial intelligence language model released by OpenAI in August 2025, integrating the large language processing capabilities of the GPT series with the deep reasoning functions of the o series. It supports sub-model scheduling and complex task processing.
  • Vendor: The vendor or platform providing this model service. Example: Azure, SiliconFlow
  • Provider Identifier: The API parameter style the model follows. In most cases, select OpenAI.
  • Model Type: Whether the model is a text model, embedding model, etc.
  • Model Capability: The capabilities the model supports. This selection determines the scenarios in which the model can be used in an Agent.
  • Max Token Count: The maximum number of tokens supported for input by the model. The default value is 100,000.
  • Currency: The currency unit for model billing.
  • Price per Million Input Tokens: The cost for processing input tokens.
  • Price per Million Output Tokens: The cost for generating output tokens.
  • Temperature: The higher the temperature, the more random the output, which helps the Agent generate more creative responses. Some models, such as GPT-5, require the temperature to be set to 1.
  • Top P: Increasing top P expands the vocabulary the Agent can draw from, improving response diversity but potentially reducing semantic consistency; lowering top P improves logical consistency. Some models, such as GPT-5, require top P to be set to 1. A configuration sketch covering these fields follows this list.
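
To see how these fields fit together, here is a minimal sketch that collects them into a single Python structure for the GPT-5 example above. The field names and prices are illustrative assumptions, not SERVICEME's actual schema or real vendor pricing.

```python
# Illustrative only: the "Add Model" fields gathered into one structure.
# Field names and prices are assumptions, not SERVICEME's actual schema.
gpt5_model = {
    "name": "GPT-5",
    "description": "OpenAI language model with deep reasoning support.",
    "vendor": "Azure",
    "provider_identifier": "OpenAI",          # parameter style the model follows
    "model_type": "text",                     # text model, embedding model, etc.
    "model_capabilities": ["chat", "reasoning"],
    "max_token_count": 100_000,               # default maximum input tokens
    "currency": "USD",
    "price_per_million_input_tokens": 1.25,   # example figure only
    "price_per_million_output_tokens": 10.0,  # example figure only
    "temperature": 1,                         # GPT-5 requires temperature = 1
    "top_p": 1,                               # GPT-5 requires top P = 1
}
```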

Add Model Client

After adding the model, you also need to add a client (the platform or server that actually serves the model) before the model can be used.

  1. Click the model you just added to open its details page;
  2. Click the New Client button in the upper right corner.

Create New Client

  • Provider Identifier: The API parameter style the model follows. In most cases, select OpenAI.
  • Name: The name of this platform or server, e.g., oai-eastus2
  • Deployment Name: The name of the model deployment on the platform. When calling the model later, this is the value passed in the model field. For example: gpt-5-deployment, Qwen/Qwen3-VL-235B-A22B-Instruct
  • Base URL: The base address of this platform or server
  • API Key: The key required to call this model
  • Weight: When there are multiple clients, a higher weight means a greater probability of being called
  • Priority: When there are multiple clients, clients with higher priority are called first. Among clients with the same priority, weight determines the choice (see the sketch after this list)
  • Enable: The client must be enabled before it can be used
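
The sketch below shows one common way priority and weight combine when routing across multiple clients: take the highest-priority enabled clients, then choose among them by weighted random selection. It illustrates the general pattern only; it is not SERVICEME's actual routing code, and the client names and values are made up.

```python
import random

# Illustrative only: priority-then-weight client selection.
clients = [
    {"name": "oai-eastus2", "priority": 10, "weight": 70,  "enabled": True},
    {"name": "oai-westus",  "priority": 10, "weight": 30,  "enabled": True},
    {"name": "backup",      "priority": 1,  "weight": 100, "enabled": True},
]

def pick_client(clients):
    """Pick the highest-priority enabled client; break ties by weighted random choice."""
    enabled = [c for c in clients if c["enabled"]]
    top = max(c["priority"] for c in enabled)
    candidates = [c for c in enabled if c["priority"] == top]
    return random.choices(candidates, weights=[c["weight"] for c in candidates], k=1)[0]

print(pick_client(clients)["name"])  # usually "oai-eastus2", sometimes "oai-westus"
```
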
  1. After filling in the fields, click Save to finish creating the Client
  2. The Client you just created appears in the list on the model details page. Click the Test button to check whether the Client can successfully call the model; a minimal standalone check is sketched below
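
For an OpenAI-compatible client, a test call from outside the platform looks roughly like the sketch below, with the Deployment Name passed in the model field. The base URL, API key, and deployment name are placeholders based on the examples above; this is an assumption about what such a call looks like, not SERVICEME's Test implementation.

```python
from openai import OpenAI

# Rough equivalent of a client test against an OpenAI-compatible endpoint.
# base_url, api_key, and model are placeholders; use your Client's values.
client = OpenAI(
    base_url="https://oai-eastus2.example.com/v1",  # Base URL of the Client
    api_key="YOUR_API_KEY",                         # API Key of the Client
)

response = client.chat.completions.create(
    model="gpt-5-deployment",  # the Deployment Name goes in the model field
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```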