Text Chat
In the Lab, select Text Chat from the left sidebar. Text Chat is divided into LLM and multimodal scenarios. In both scenarios, the center area serves as the interaction area for the dialogue. Choose the language model you want to use from the dropdown menu, type your question in the prompt box, and press Enter. Both the question and the response will appear in the central interaction area.
In the multimodal scenario, in addition to text dialogue, you can also upload images, either on their own or together with text, to interact with the model.
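If you want to reproduce the same kind of multimodal turn programmatically rather than through the Lab UI, it typically amounts to sending the text prompt alongside a base64-encoded image. The sketch below assumes an OpenAI-style chat-completions endpoint, model name, and message format; these are placeholders for illustration, not the Lab's documented API.

```python
# Minimal sketch of a multimodal chat turn: one image plus a text question.
# The endpoint URL, model name, message format, and API key are assumptions
# made for illustration, not the Lab's documented API.
import base64
import requests

API_URL = "https://example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

# Encode the local image so it can travel inside the JSON payload.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "multimodal-chat",  # hypothetical model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```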

The right sidebar is the parameter settings area for Text Chat. The set of configurable parameters varies by model series. Refer to the glossary and adjust the parameters to your needs for the best generation results.
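As a rough illustration of what such parameters control, the sketch below attaches a few commonly supported sampling settings to a request. The specific parameter names (temperature, top_p, max_tokens), the endpoint, and the model name are assumptions for illustration; the glossary lists the parameters your chosen model series actually supports.

```python
# Sketch of passing common generation parameters with a chat request.
# Endpoint, model name, and parameter names are illustrative assumptions;
# consult the glossary for the parameters your model series supports.
import requests

payload = {
    "model": "chat-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Summarize the theory of relativity."}
    ],
    "temperature": 0.7,  # higher values -> more varied output
    "top_p": 0.9,        # nucleus sampling cutoff
    "max_tokens": 512,   # cap on the length of the response
}

response = requests.post(
    "https://example.com/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```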

In the Text Chat parameter settings area, the first line is the System Prompt. This is the initial instruction or set of rules used to guide the model in generating specific types of responses. It defines how the model should behave, which information should be prioritized, and how to respond to user input in various scenarios. The System Prompt helps set the context, tone, style, and expected output format for the model.
The System Prompt provided by the user during text chat will influence the model's behavior and output throughout the conversation.
For example, System Prompts may include any of the following (a short request sketch follows this list):
Specifying the model's tone (e.g., friendly, formal, humorous)
Defining the output format (e.g., answering in list form, providing detailed explanations)
Highlighting specific topics or areas of focus
Setting rules regarding information accuracy, ethical constraints, etc.
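To make this concrete, the sketch below shows a System Prompt that fixes the tone and output format before the user's question is sent. The endpoint, model name, and message format are placeholder assumptions, not the Lab's actual API; only the idea of a system message leading the conversation is the point here.

```python
# Sketch of a conversation led by a System Prompt that sets tone and format.
# Endpoint, model name, and message format are placeholders (assumptions).
import requests

system_prompt = (
    "You are a friendly technical assistant. "
    "Answer in a numbered list and keep each point under two sentences."
)

payload = {
    "model": "chat-model",  # hypothetical model name
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

response = requests.post(
    "https://example.com/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

Because the system message is sent with every turn, the tone and format it specifies persist across the whole conversation, which is why the System Prompt set in the parameter area influences every response in the session.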