Glossary & Concepts
Familiarizing yourself with the following glossary terms will help you get more effective results in the Lab:
System Prompt: A preset instruction or guidance given to the model to steer its behavior, responses, and interaction style in a specific direction.
Temperature: Controls the randomness of the output. Higher values make the output more random and creative, while lower values make it more focused and deterministic. We recommend adjusting either the top_p or temperature parameter based on the application scenario, but not both simultaneously.
Max_tokens: The maximum number of tokens shared between the input and output, which limits the total length of text the model can process in a single request.
Top_p: Controls the randomness and diversity of the output text. A higher value results in more varied text. We recommend adjusting either the top_p or temperature parameter based on the application scenario, but not both simultaneously.
Presence Penalty: Applies a penalty to words that have already appeared in the generated content, encouraging the model to use new words and avoid repetition.
Frequency Penalty: Reduces the likelihood of repeating words by applying a penalty to words that have been used frequently.
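The sketch below shows how these text-generation parameters might be passed together in a single request. It assumes an OpenAI-compatible chat completions endpoint; the URL, model name, and API key are placeholders, so adapt them to the Lab's actual API.

```python
import requests

# Hypothetical OpenAI-compatible chat completions endpoint; substitute the
# Lab's actual base URL, model name, and API key.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "example-chat-model",  # placeholder model name
    "messages": [
        # The system prompt steers the model's behavior and response style.
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain what a system prompt is in one sentence."},
    ],
    "temperature": 0.7,        # randomness: lower = more focused and deterministic
    # "top_p": 0.9,            # alternative to temperature; adjust one, not both
    "max_tokens": 256,         # upper bound on the tokens generated in the reply
    "presence_penalty": 0.2,   # penalizes words that have already appeared
    "frequency_penalty": 0.2,  # penalizes words in proportion to how often they appear
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```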
Size: The aspect ratio of the generated image.
Seed: The random seed used for generation. Using the same seed and prompt (with other settings unchanged) produces similar or identical images.
Inference Steps: The number of denoising steps performed during generation. More steps can produce higher-quality images but require more time.
Guidance Scale: A higher value makes the generated image more aligned with the prompt, but if set too high, it may lead to lower quality or distortion. Conversely, a lower value makes the image more free-form but may reduce its relevance to the prompt.
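The image-generation parameters above might be combined as in the following sketch. The endpoint URL, model name, and exact field names (image_size, num_inference_steps, guidance_scale) are assumptions; consult the Lab's API reference for the actual parameters.

```python
import requests

# Hypothetical image-generation endpoint; URL, model name, and parameter
# names are placeholders for illustration only.
API_URL = "https://api.example.com/v1/images/generations"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "example-image-model",  # placeholder model name
    "prompt": "a watercolor painting of a lighthouse at dawn",
    "image_size": "1024x1024",       # output size / aspect ratio
    "seed": 42,                      # same seed + prompt -> similar images
    "num_inference_steps": 30,       # more steps: higher quality, slower
    "guidance_scale": 7.5,           # higher: closer to the prompt; too high may distort
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
print(response.json())  # typically contains a URL or base64 data for the image
```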