
How to Use Data?

Concept

The data module helps enterprises efficiently manage, utilize, and share their data assets. Through flexible data interfaces, metadata management, data source configuration, and data import features, users can fully control and optimize the flow of data. The product offers an intuitive interface for data querying, preview, and operations, along with data subscription and push functionalities that keep external systems supplied with up-to-date data.

Overall Architecture

Data Assets

As the entry point of the data module, the Data Assets section provides an overview that enables enterprise users to view the quantity and details of various data assets within the system. This helps users gain a comprehensive understanding of the current data landscape and distribution.

Search Data Assets

The search function (magnifying glass) allows users to quickly locate data assets using keywords, asset categories, and other information.

View Asset Data and Metadata

Each data asset card offers quick links to data preview and metadata views.

Clicking the central area of a card opens the data preview interface, which shows a portion of the data in the corresponding table. Scrolling down loads more data.

If the ChatX module is enabled, a Copilot dialogue interface appears on the right, allowing users to query the data using natural language and gain insights from the system.

Clicking the [View Metadata] button at the top-right corner of the card leads to the metadata view. This view displays table-level metadata (table name, comments, and remarks) as well as information on every field in the table (field names, comments, and types).

Click the [View] button next to a field to open that field's detail page.

Data Interface

The data interface section offers streamlined functionalities for creating, modifying, and testing data interfaces. Users can efficiently develop and maintain data interfaces, with support for interface log querying to ensure stability and traceability.

Instructions:

  • Search: Filter interfaces by name or data entity name.
  • New: Click to define a new data interface.
  • Interface Logs: Enter the interface log view page.
  • Authorize: Grant access permission to specified clients; interfaces cannot be accessed without client binding.
  • Row Operations:
    • Details: View detailed information about the interface.
    • Edit: Modify an already created interface.
    • Logs: View logs related only to this interface.
    • Enable/Disable: Disabled interfaces cannot be called but are not deleted and can be re-enabled at any time.
    • Delete: Delete the interface.

Interface Details:

Create Data Interface

Creating an interface adds a new data interface to the system. Users can define different interfaces for various data entities, conditions, and fields. Currently, only single-entity query interfaces are supported. For multi-entity queries, combine entities into a view on the upstream platform before using the data interface to add conditions for querying.

Field Description:

  • Category: The interface's category, useful for grouping when there are many interfaces.
  • Interface Name: Unique identifier for the interface.
  • Interface Address: Access address for the published interface; must be unique.
  • Data Source: The interface's data source, which must first be configured in the system.
  • Data Entity: The data entity from which the interface queries data.
  • Input Parameters: Define one or more input parameters to filter data, with operators like equals, contains, and greater than. The caller provides these parameters.
  • Fixed Parameters: Parameters/conditions that do not require input from the caller, such as always querying data for a specific region.
  • Pagination: If enabled, callers page through results using the pageSize and pageIndex parameters; the response includes totalCount and totalPage (see the request sketch after this list).
  • Sorting: Specify sorting fields and directions (ascending or descending).
  • Return Parameters: Control which fields are returned by the interface; unchecked fields will not appear in the query response.
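
For illustration, here is a minimal sketch of calling a published interface, assuming a hypothetical interface address (/api/data/orders) with a region input parameter and pagination enabled. The HTTP method, endpoint, parameter placement, and response field names are assumptions rather than the documented contract; the Authorization value must be calculated as described under Interface Testing below.

```python
# Hypothetical call to a published data interface. The endpoint, HTTP method,
# parameter names, and response fields below are illustrative assumptions.
import requests

resp = requests.post(
    "https://serviceme.example.com/api/data/orders",  # Interface Address (hypothetical)
    headers={"Authorization": "<calculated value>"},  # see Technical Guide/DATA Integration
    json={
        "region": "EMEA",  # an input parameter defined on the interface
        "pageSize": 50,    # pagination: records per page
        "pageIndex": 1,    # pagination: page number (assumed 1-based)
    },
    timeout=30,
)
body = resp.json()
print(body["totalCount"], body["totalPage"])  # returned when pagination is enabled
```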

Data Interface Authorization

All data interfaces must be authorized before external systems can call them. A client represents an external system, typically with one client per external system. Before authorization, the SERVICEME admin needs to add the client to the system for selection.

Interface Testing (Preview)

Test a created data interface to verify it behaves as expected; the test view shows the detailed input and output messages.

Note: Authorization must be completed before testing, and a calculated Authorization value must be included in the Header. Refer to [Technical Guide/DATA Integration] for the Authorization algorithm.

View Interface Logs

View logs for all data interface queries over a recent period, including caller IP and response messages.

Data Subscription

Unlike data interfaces, data subscriptions involve SERVICEME actively pushing data to external systems. Users can set data subscription conditions and frequency to check for qualifying data at specified times. If conditions are met, the relevant data is pushed to the designated interface.

Instructions:

  • Search: Search subscriptions by name, etc.
  • New: Fill out parameters to create a new subscription.
  • Trigger Now: Manually trigger data checks and pushes.
  • Delete: Remove subscriptions, halting triggers and data pushes.
  • Push Logs: View push logs across subscriptions.
  • Row Operations:
    • Enable/Disable: Disable to stop triggers and data pushes.
    • View Push Logs: View logs for the specific subscription.

Create and Edit Data Subscriptions

Field Description:

  • Data Configuration:
    • Name: Unique identifier for the subscription.
    • Data Source: Source of the subscription data.
    • Data Entity: Entity for data query (table or view).
    • Filter Rules: Define conditions for data qualification.
    • Data Fields: Fields to be pushed, with optional aliases and default values.
  • Subscription Configuration:
    • Push Method: Currently supports HTTP.
    • Frequency: Push frequency, expressed as a standard 5-field cron expression (e.g., 0 * * * * triggers at the top of every hour).
    • Batch Size: If the matching data exceeds this limit, it is split and pushed in batches of this size.
    • API: Full API address (including domain and path) for receiving data.
    • HTTP Method: Supports POST and PUT.
    • Headers: Define required headers, e.g., for authorization.
    • Body: Data is sent as JSON, with the record array in the data attribute (see the receiver sketch after this list).
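
To make the payload shape concrete, here is a minimal sketch of an external endpoint that could receive subscription pushes. Only the top-level data attribute is taken from the Body description above; the route, processing, and response are assumptions.

```python
# Minimal sketch of an external endpoint receiving subscription pushes.
# Only the top-level "data" attribute comes from the Body description above;
# the route, record handling, and response are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/serviceme/push", methods=["POST"])  # the API address configured above
def receive_push():
    payload = request.get_json()
    records = payload.get("data", [])  # pushed records arrive in the "data" array
    for record in records:
        ...  # persist or process each record
    return jsonify({"received": len(records)})

if __name__ == "__main__":
    app.run(port=8080)
```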

Special Rules:

  • Date fields support date variables for easy queries within specific time frames.

Date Variable Examples:

  • $utcnow-1m$: Last minute
  • $utcnow-1h$: Last hour
  • $utcnow-1d$: Last day
  • $utcnow-1w$: Last week
  • $utcnow-1M$: Last month
  • $utcnow-1y$: Last year

Combination:

  • $utcnow-1h20m$: Last hour and 20 minutes
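
As a rough illustration of how such a variable could resolve to a timestamp, the sketch below handles the minute, hour, day, and week units (month and year offsets are omitted because they are not fixed-length). It is an assumption-based sketch, not the product's actual parser.

```python
# Hypothetical sketch of resolving a $utcnow-...$ variable to a timestamp,
# covering minute/hour/day/week offsets; months (M) and years (y) are omitted
# because they are not fixed-length. This is not the product's actual parser.
import re
from datetime import datetime, timedelta, timezone

UNITS = {"m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def resolve(variable: str) -> datetime:
    offsets = re.findall(r"(\d+)([mhdw])", variable)  # "1h20m" -> [("1","h"), ("20","m")]
    delta = timedelta(**{UNITS[unit]: int(n) for n, unit in offsets})
    return datetime.now(timezone.utc) - delta

print(resolve("$utcnow-1h20m$"))  # one hour and twenty minutes ago, in UTC
```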

Push Log Query

View all push logs to understand the status of data subscriptions. If the recipient reports missing data, use logs for troubleshooting.

Data Source

The data module supports multiple types of data sources. Users can add and configure data sources here; data interfaces and subscriptions then draw on them.

Instructions:

  • Search: Filter by ID/name/type.
  • New: Open data source creation form.
  • Sync Metadata: Immediately sync metadata for this source.
  • Delete: Delete the data source and cease metadata syncing.
  • Row Operations:
    • Click Link: View the data source's details; for security, the saved connection string can only be edited, not viewed.

Create Data Source

Choose from supported sources: Excel, Azure SQL, Azure Databricks.

For Azure SQL, enter basic info and connection string.

Field Description:

  • ID: Unique identifier for the data source.
  • Name: Display name.
  • Tag: Used for categorizing data sources.
  • Connection String: Enter the connection string and test the connection (see the example below).
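
For reference, an Azure SQL connection string typically follows the standard ADO.NET form below; all values are placeholders, not real credentials.

```
Server=tcp:<server-name>.database.windows.net,1433;Initial Catalog=<database>;User ID=<username>;Password=<password>;Encrypt=True;
```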

After testing, proceed to metadata selection.

Instructions:

  • Sync All: Sync all metadata, including future updates.
  • Select Sync: Sync selected metadata only.

Click Save to complete creation.

Metadata

Built-in metadata management supports the creation, editing, and management of data domains, aiding enterprises in building a clear data architecture.

Instructions:

  • Search: Filter by entity name/display name/data source.
  • Sync Metadata: Sync the entity and fields immediately.
  • Set Data Domain: Assign metadata to a data domain.
  • Data Domain Management: Manage data domains.
  • Row Operations:
    • Entity Name Link: View metadata details.

Edit Metadata

Editing metadata allows users to add meaningful names and descriptions to tables and fields, helping the system understand the data and making AI-driven queries more reliable.

Manage Data Domains

Create separate data domains to classify and manage data assets by category.

Data Import

The data module supports importing data from files using templates, which can be created manually or generated from a file. Data can be imported into new tables or into existing ones.

Instructions:

  • Search: Filter by table name.
  • Template Management: Create/edit import templates (required before data import).
  • Import Data: Execute data import.
  • Delete: Remove imported tables.
  • Row Operations:
    • Table Name Link: View table data.
    • Edit: Modify the table's data domain.
    • Export: Export the entire table.

Template Creation

Create templates manually or from a file.

  1. Manual Creation:

    • Define template structure in the interface.
    • Add/edit/delete fields, set types and attributes.
  2. File-Based Creation:

    • Upload an Excel file that contains the desired structure.
    • Choose specific sheets as template bases.

In both cases, define each field's name, type, description, whether it is required, and its validation rules.

Manual Template Creation

Instructions:

  • Template Name: Unique identifier.
  • Add Field: Adds a new column to the template.
  • Field Options:
    • Parameter Name: The column name used in the data table.
    • Display Name: The name shown when viewing imported data.
    • Type: Field type (String, Number, Boolean, DateTime).
    • Primary Key: Required for update imports; identifies which existing record to update.
    • Settings: Define validation rules.

Validation Rules:

  • Range: Constrain values within a range.
  • Value Set: Define enumerations.
  • Unique: Ensure unique values.
  • Foreign Key Constraint: Validate existence against another table.
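
For illustration only, the sketch below shows how these four rules might behave for a single imported row; the field names, bounds, and value sets are hypothetical, not product defaults.

```python
# Illustration only: how the four validation rules might behave for one
# imported row. Field names, bounds, and value sets are hypothetical.
def validate_row(row, existing_codes, seen_ids):
    errors = []
    if not (0 <= row["quantity"] <= 1000):           # Range
        errors.append("quantity out of range 0-1000")
    if row["status"] not in {"active", "inactive"}:  # Value Set
        errors.append("status not in the allowed value set")
    if row["id"] in seen_ids:                        # Unique
        errors.append("duplicate id")
    if row["region_code"] not in existing_codes:     # Foreign Key Constraint
        errors.append("region_code not found in the referenced table")
    return errors

row = {"id": 1, "quantity": 5, "status": "active", "region_code": "EMEA"}
print(validate_row(row, existing_codes={"EMEA", "APAC"}, seen_ids=set()))  # []
```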

File-Based Template Creation

Choose Sheet:

Columns are inferred from the file's structure, and the same field options as in manual creation are then available.

Data Import

Data import process steps:

  1. Select Data File:

    • Upload an Excel/CSV file and select the desired Sheet.

  2. Data Preview and Validation:

    • Preview data before final import.

  3. Choose Import Method:

    • Create New Table: Import into a new table.

    • Import to Existing Data: Add or update data in an existing table (see the sketch after these steps).
      • Add: Always adds new records.
      • Update or Add: Updates existing records or adds new ones.
      • Overwrite: Overwrites entire table data.

  4. Execute Import:

    • System matches file columns to template fields.
  5. Error Handling:

    • Detailed error logs allow users to correct the data and re-import it.
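
For illustration, the sketch below models the semantics of the three import methods from step 3, keyed on the template's primary key; the function and data structures are hypothetical, not the product's import engine.

```python
# Hypothetical sketch of the three import-method semantics, keyed on the
# template's primary key. This illustrates behavior only.
def import_rows(table, rows, method, key="id"):
    if method == "overwrite":
        table.clear()                     # Overwrite: discard all existing data first
    index = {r[key]: i for i, r in enumerate(table)}
    for row in rows:
        if method == "add":
            table.append(row)             # Add: always insert as a new record
        elif row[key] in index:
            table[index[row[key]]] = row  # Update or Add: update the matching record...
        else:
            table.append(row)             # ...or insert when no match exists
            index[row[key]] = len(table) - 1

table = [{"id": 1, "qty": 5}]
import_rows(table, [{"id": 1, "qty": 9}, {"id": 2, "qty": 3}], "update_or_add")
print(table)  # [{'id': 1, 'qty': 9}, {'id': 2, 'qty': 3}]
```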