author    Grail Finder <wohilas@gmail.com>  2026-01-10 09:41:59 +0300
committer Grail Finder <wohilas@gmail.com>  2026-01-10 09:41:59 +0300
commit    505477b8e388ee351f724e1b389db549bb4ce003 (patch)
tree      ff672db7e78f879dc26b6ac11f3cc84a26550aa3
parent    51916895789fc2b166af752c14cd4696d6517bec (diff)

Doc: readme update

-rw-r--r--  README.md            53
-rw-r--r--  assets/ex01.png      bin 69006 -> 224192 bytes
-rw-r--r--  assets/helppage.png  bin 0 -> 180067 bytes
-rw-r--r--  docs/tutorial_rp.md   2

4 files changed, 5 insertions, 50 deletions
diff --git a/README.md b/README.md
index 43cba31..7decc07 100644
--- a/README.md
+++ b/README.md
@@ -23,61 +23,14 @@ make
#### keybindings
while running you can press f12 for list of keys;
-```
-Esc: send msg
-PgUp/Down: switch focus between input and chat widgets
-F1: manage chats
-F2: regen last
-F3: delete last msg
-F4: edit msg
-F5: toggle fullscreen for input/chat window
-F6: interrupt bot resp
-F7: copy last msg to clipboard (linux xclip)
-F8: copy n msg to clipboard (linux xclip)
-F9: table to copy from; with all code blocks
-F10: switch if LLM will respond on this message (for user to write multiple messages in a row)
-F11: import json chat file
-F12: show this help page
-Ctrl+w: resume generation on the last msg
-Ctrl+s: load new char/agent
-Ctrl+e: export chat to json file
-Ctrl+c: close programm
-Ctrl+n: start a new chat
-Ctrl+o: open image file picker
-Ctrl+p: props edit form (min-p, dry, etc.)
-Ctrl+v: switch between /completion and /chat api (if provided in config)
-Ctrl+r: start/stop recording from your microphone (needs stt server or whisper binary)
-Ctrl+t: remove thinking (<think>) and tool messages from context (delete from chat)
-Ctrl+l: rotate through free OpenRouter models (if openrouter api) or update connected model name (llamacpp)
-Ctrl+k: switch tool use (recommend tool use to llm after user msg)
-Ctrl+j: if chat agent is char.png will show the image; then any key to return
-Ctrl+a: interrupt tts (needs tts server)
-Ctrl+g: open RAG file manager (load files for context retrieval)
-Ctrl+y: list loaded RAG files (view and manage loaded files)
-Ctrl+q: cycle through mentioned chars in chat, to pick persona to send next msg as
-Ctrl+x: cycle through mentioned chars in chat, to pick persona to send next msg as (for llm)
-Alt+1: toggle shell mode (execute commands locally)
-Alt+4: edit msg role
-Alt+5: toggle system and tool messages display
-
-=== scrolling chat window (some keys similar to vim) ===
-arrows up/down and j/k: scroll up and down
-gg/G: jump to the begging / end of the chat
-/: start searching for text
-n: go to next search result
-N: go to previous search result
-
-=== tables (chat history, agent pick, file pick, properties) ===
-x: to exit the table page
-
-trl+x: cycle through mentioned chars in chat, to pick persona to send next msg as (for llm)
-```
+![keybinds](assets/helppage.png)
#### setting up config
```
cp config.example.toml config.toml
```
-set values as you need them to be.
+set values as you need them to be;
+[description of config variables](docs/config.md)
#### setting up STT/TTS services
For speech-to-text (STT) and text-to-speech (TTS) functionality:
diff --git a/assets/ex01.png b/assets/ex01.png
index b0f5ae3..90ad254 100644
--- a/assets/ex01.png
+++ b/assets/ex01.png
Binary files differ
diff --git a/assets/helppage.png b/assets/helppage.png
new file mode 100644
index 0000000..5128a62
--- /dev/null
+++ b/assets/helppage.png
Binary files differ
diff --git a/docs/tutorial_rp.md b/docs/tutorial_rp.md
index 9053ffb..d670745 100644
--- a/docs/tutorial_rp.md
+++ b/docs/tutorial_rp.md
@@ -62,6 +62,8 @@ In case you're running llama.cpp, here is an example of starting the llama.cpp s
**After changing config.toml or environment variables, you need to restart the program.**
+`Ctrl+C` to close the program and `make` to rebuild and start it again.
+
For roleplay, /completion endpoints are much better, since /chat endpoints swap any character name to either `user` or `assistant`.
Once you have the desired API endpoint
(for example: http://localhost:8080/completion),
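
The point above about /chat endpoints swapping character names can be illustrated with a minimal sketch. The character names and prompt format below are hypothetical, not taken from this project's code; the sketch only shows why raw-text /completion prompts preserve speaker identity while /chat role fields cannot:

```python
# Sketch: /chat APIs accept only fixed roles ("user"/"assistant"/"system"),
# so a client must map every roleplay character onto one of those roles,
# losing the character's name. A /completion API takes raw text, so the
# names survive inside the prompt itself.

turns = [("Alice", "Hello there."), ("Bob", "Hi, Alice!")]

# /chat-style payload: character identity is collapsed into the role field
chat_messages = [
    {"role": "user" if i % 2 == 0 else "assistant", "content": text}
    for i, (name, text) in enumerate(turns)
]

# /completion-style payload: names are kept in the prompt text
completion_prompt = "\n".join(f"{name}: {text}" for name, text in turns)

print(completion_prompt)
# Alice: Hello there.
# Bob: Hi, Alice!
```

Note that in the /chat payload nothing distinguishes Alice from any other "user", which is exactly why the tutorial recommends /completion for roleplay.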