Integrate with RAGFlow
BrillAI is compatible with the community edition of Xinference, allowing it to integrate with RAGFlow through Xinference. The specific steps for integrating with RAGFlow are as follows.
Once in RAGFlow, click the avatar in the top-right corner, select "Model Providers" on that page, locate Xinference, and click "Add the model."

In the pop-up dialog, fill in the relevant configuration. You can find the model type and model UID at https://inference.top/models (the model UID is the name of the corresponding model). Enter "https://api.inference.top" as the base URL, and create an API key on our website.

After configuring the model here, you can proceed to the next step. Note that you need to configure at least the embedding, rerank, and chat models.
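Before wiring these models into RAGFlow, it can save troubleshooting time to verify that your base URL and API key work. The snippet below is a minimal sketch that assumes BrillAI exposes an OpenAI/Xinference-compatible HTTP API under `/v1`; the exact endpoint paths and the `<chat-model-uid>` placeholder are assumptions, so substitute the model UID you found at https://inference.top/models.

```shell
# Assumption: an OpenAI-compatible API is served under /v1.
export BRILLAI_API_KEY="your-api-key"   # the key created on the BrillAI website

# List available models; the "id" fields should match the model UIDs
# shown at https://inference.top/models.
curl -s https://api.inference.top/v1/models \
  -H "Authorization: Bearer $BRILLAI_API_KEY"

# Send a minimal request to the chat model you plan to configure in RAGFlow.
curl -s https://api.inference.top/v1/chat/completions \
  -H "Authorization: Bearer $BRILLAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "<chat-model-uid>", "messages": [{"role": "user", "content": "ping"}]}'
```

If both requests return JSON rather than an authentication error, the same base URL and API key should work in the RAGFlow model-provider form.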

Next, click on "Knowledge Base" at the top to start creating a knowledge base. Enter the name, click "OK," and you’ll be taken to the knowledge base configuration page.

Locate the "Embedding model" option and select the embedding model we just added, which in this case is bge-m3. You can also choose any other embedding model you have configured. Adjust the other settings as needed, and click "Save" at the bottom of the page once you’re done.

After completing the above configuration, you will be automatically redirected to the "Dataset" page, where you can click "Add file" to upload knowledge base files.

Once the file upload is complete, processing does not start automatically; click the "Start" button in the "Action" column to begin parsing.

Once the knowledge base is successfully created, go to the "Chat" page and click "Create an Assistant" to set up an assistant. In the pop-up window, enter any name and select the knowledge base you just created under "Knowledgebases." Then click on "Prompt Engine" at the top to complete the remaining configurations.

On the "Prompt Engine" page, select the rerank model that was just added. Finally, go to the "Model Setting" page.

On the "Model Setting" page, select the chat model you just configured. At this point, all configurations are complete, and you can begin conversing with the model.

Click the "plus" icon next to "Chat" to start a conversation with the model. During the conversation, it will automatically search for relevant content within the knowledge base.
