| author | Grail Finder <wohilas@gmail.com> | 2025-12-21 11:39:19 +0300 |
|---|---|---|
| committer | Grail Finder <wohilas@gmail.com> | 2025-12-21 11:39:19 +0300 |
| commit | 1ca75a00642c4e0a6eea3117e3b4ebaacfdcfa7a (patch) | |
| tree | d91ad0317e90fe6de2f08623f4fa2db91f44d6bd /tutorial_rp.md | |
| parent | c001dedc7da5a8bf47e3b8f6700c3e50b88c6f34 (diff) | |
Chore: readme update
Diffstat (limited to 'tutorial_rp.md')
| -rw-r--r-- | tutorial_rp.md | 8 |
1 file changed, 8 insertions(+), 0 deletions(-)
```diff
diff --git a/tutorial_rp.md b/tutorial_rp.md
index 6958e25..75d0616 100644
--- a/tutorial_rp.md
+++ b/tutorial_rp.md
@@ -42,6 +42,7 @@ then press `x` to close the table.
 
 #### choosing LLM provider and model
+now we need to pick an API endpoint and a model to converse with.
 supported backends: llama.cpp, openrouter and deepseek.
 for openrouter and deepseek you will need a token.
 set it in config.toml or set envvar
 
@@ -60,5 +61,12 @@ in case you're running llama.cpp here is an example of starting llama.cpp
 <b>after changing config.toml or envvar you need to restart the program.</b>
 
+for RP, /completion endpoints are much better, since /chat endpoints swap any character name to either `user` or `assistant`.
+once you have the desired API endpoint
+(for example: http://localhost:8080/completion)
+there are two ways to pick a model:
+- `ctrl+l` allows you to iterate through the model list while in the main window.
+- `ctrl+p` opens the props table: go to the `Select a model` row and press enter; a list of available models will appear. pick any you want, then press `x` to exit the props table.
+
 #### sending messages
 messages are send by pressing `Esc` button
```
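The /completion-vs-/chat point in the added tutorial text can be sketched in code. This is an illustrative comparison, not code from the project: the character names (`Alice`, `Bob`) and the name-to-role mapping are hypothetical, the /chat body follows the common OpenAI-style `messages` shape, and the /completion body uses llama.cpp's `prompt`/`n_predict` fields.

```python
import json

# Hypothetical character-to-role mapping a /chat client would be forced into;
# OpenAI-style chat endpoints only accept fixed roles like "user"/"assistant".
ROLE = {"Alice": "user", "Bob": "assistant"}

def chat_payload(turns):
    # /chat body: each turn becomes a role-tagged message,
    # so the character names themselves are lost.
    return {"messages": [{"role": ROLE.get(name, "user"), "content": text}
                         for name, text in turns]}

def completion_payload(turns):
    # /completion body: one raw prompt string,
    # so character names survive verbatim in the text the model sees.
    prompt = "\n".join(f"{name}: {text}" for name, text in turns)
    return {"prompt": prompt, "n_predict": 64}

turns = [("Alice", "hello"), ("Bob", "hi there")]
print(json.dumps(chat_payload(turns)))       # names collapsed to roles
print(completion_payload(turns)["prompt"])   # names preserved in the prompt
```

Running this shows the /chat payload reduces both speakers to `user`/`assistant`, while the /completion prompt keeps `Alice:` and `Bob:` intact, which is exactly why the tutorial recommends /completion for RP.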
