From 75f51c1a19632ea093e1aad6a91cc7f4a349658c Mon Sep 17 00:00:00 2001
From: Grail Finder
Date: Wed, 22 Jan 2025 20:17:49 +0300
Subject: Feat: llamacpp /completion attempt

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'README.md')

diff --git a/README.md b/README.md
index cb593fa..7de7558 100644
--- a/README.md
+++ b/README.md
@@ -40,6 +40,7 @@
 - consider adding use /completion of llamacpp, since openai endpoint clearly has template|format issues;
 - change temp, min-p and other params from tui;
 - DRY;
+- keybind to switch between openai and llamacpp endpoints;

 ### FIX:
 - bot responding (or hanging) blocks everything; +
@@ -63,3 +64,4 @@
 - number of sentences in a batch should depend on number of words there. +
 - F1 can load any chat, by loading chat of other agent it does not switch agents, if that chat is continued, it will rewrite agent in db; (either allow only chats from current agent OR switch agent on chat loading); +
 - after chat is deleted: load undeleted chat; +
+- name split for llamacpp completion. user msg should end with 'bot_name:';
-- 
cgit v1.2.3
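
The two TODO items added by this commit describe moving from the OpenAI-compatible chat endpoint to llama.cpp's raw /completion endpoint, where the client assembles the prompt itself and leaves it hanging on "bot_name:" so the model continues in the bot's voice. Below is a minimal sketch of that idea, assuming a Go client and a llama.cpp server on its default localhost:8080 address; the completionReq struct, the buildPrompt helper, the role names, and the sampling values are illustrative assumptions, not code from this commit.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// completionReq covers a small subset of the llama.cpp server's /completion JSON body.
type completionReq struct {
	Prompt      string   `json:"prompt"`
	Temperature float64  `json:"temperature"`
	NPredict    int      `json:"n_predict"`
	Stop        []string `json:"stop"`
}

// buildPrompt joins the chat history as plain text and ends the prompt with
// "botName:" so the completion continues as the bot (the "name split" from the TODO).
func buildPrompt(history []string, botName string) string {
	var b strings.Builder
	for _, line := range history {
		b.WriteString(line + "\n")
	}
	b.WriteString(botName + ":")
	return b.String()
}

func main() {
	userName, botName := "User", "Assistant" // hypothetical role names
	history := []string{userName + ": hello, who are you?"}

	body, _ := json.Marshal(completionReq{
		Prompt:      buildPrompt(history, botName),
		Temperature: 0.8,
		NPredict:    256,
		// stop on the next user turn so the bot does not write both sides of the chat
		Stop: []string{"\n" + userName + ":"},
	})

	// assumed default llama.cpp server address; a real client would take this from config
	resp, err := http.Post("http://localhost:8080/completion", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // the response JSON carries the generated text in its "content" field
}
```

A keybind to switch endpoints, as the other added TODO item suggests, would then only need to toggle between a prompt builder like this and the existing OpenAI-style chat request.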