From 336451340b86ba1f713b47d44225df61058f5a8f Mon Sep 17 00:00:00 2001
From: Grail Finder
Date: Wed, 29 Jan 2025 20:18:40 +0300
Subject: Feat: set/change props from tui for /completion

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 769fb7c..8e9db0f 100644
--- a/README.md
+++ b/README.md
@@ -36,11 +36,11 @@
 - boolean flag to use/not use tools. I see it as a msg from a tool to an llm "Hey, it might be a good idea to use me!";
 - connection to a model status;
 - =====
 /llamacpp specific (it has a different body -> interface instead of global var)
-- edit syscards / create new ones;
+- edit syscards; +
 - consider adding use /completion of llamacpp, since openai endpoint clearly has template|format issues; +
-- change temp, min-p and other params from tui;
+- change temp, min-p and other params from tui; +
 - DRY; +
-- keybind to switch between openai and llamacpp endpoints;
+- keybind to switch between openai and llamacpp endpoints (chat vs completion);
 - option to remove from chat history;
 - in chat management table add preview of the last message; +
@@ -66,6 +66,6 @@
 - number of sentences in a batch should depend on number of words there. +
 - F1 can load any chat, by loading chat of other agent it does not switch agents, if that chat is continued, it will rewrite agent in db; (either allow only chats from current agent OR switch agent on chat loading); +
 - after chat is deleted: load undeleted chat; +
-- name split for llamacpp completion. user msg should end with 'bot_name:';
+- name split for llamacpp completion. user msg should end with 'bot_name:'; +
 - add retry on failed call (and EOF);
 - model info should be an event and show disconnect status when fails;
--
cgit v1.2.3