From 1ca75a00642c4e0a6eea3117e3b4ebaacfdcfa7a Mon Sep 17 00:00:00 2001
From: Grail Finder
Date: Sun, 21 Dec 2025 11:39:19 +0300
Subject: Chore: readme update

---
 tutorial_rp.md | 8 ++++++++
 1 file changed, 8 insertions(+)
(limited to 'tutorial_rp.md')

diff --git a/tutorial_rp.md b/tutorial_rp.md
index 6958e25..75d0616 100644
--- a/tutorial_rp.md
+++ b/tutorial_rp.md
@@ -42,6 +42,7 @@ then press `x` to close the table.
 
 #### choosing LLM provider and model
 
+now we need to pick an API endpoint and a model to converse with.
 supported backends: llama.cpp, openrouter and deepseek.
 for openrouter and deepseek you will need a token.
 set it in config.toml or set envvar
@@ -60,5 +61,12 @@ in case you're running llama.cpp
 here is an example of starting llama.cpp
 after changing config.toml or envvar you need to restart the program.
 
+for RP, /completion endpoints work much better, since /chat endpoints swap any character name to either `user` or `assistant`;
+once you have the desired API endpoint
+(for example: http://localhost:8080/completion)
+there are two ways to pick a model:
+- `ctrl+l` allows you to iterate through the model list while in the main window.
+- `ctrl+p` opens the props table: go to the `Select a model` row and press enter; a list of available models will appear. pick any one you want, then press `x` to exit the props table.
+
 #### sending messages
 messages are sent by pressing the `Esc` button
-- 
cgit v1.2.3
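The patch's point about /completion vs /chat can be sketched as follows. This is an illustration, not the tool's actual code: the character names and payload shapes are hypothetical, based on the common convention that a completion endpoint takes a raw `prompt` string while an OpenAI-style chat endpoint takes `messages` restricted to fixed roles — which is why named RP characters survive in one and get flattened in the other.

```python
import json

# Hypothetical RP transcript with named characters.
history = [
    ("Narrator", "The tavern door creaks open."),
    ("Alice", "Who goes there?"),
]

# /chat-style payload: roles must be `user` or `assistant`,
# so the character names disappear from the request entirely.
chat_payload = {
    "messages": [
        {"role": "assistant" if i % 2 == 0 else "user", "content": text}
        for i, (_, text) in enumerate(history)
    ],
}

# /completion-style payload: the prompt is raw text, so character
# names are preserved exactly as written, and we can cue the next
# speaker by name.
completion_payload = {
    "prompt": "\n".join(f"{name}: {text}" for name, text in history) + "\nBob:",
}

print(json.dumps(completion_payload, indent=2))
```

Note how `Narrator` and `Alice` appear nowhere in `chat_payload`: a /chat backend only ever sees the two fixed roles, which is exactly the name-swapping the tutorial warns about.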