Commit Graph

90 Commits

SHA1 Message Date
dfafc573e5 tui: handle multi-part responses 2024-03-17 22:55:02 +00:00
97f81a0cbb tui: scroll content view with output
clean up msgResponseChunk handling
2024-03-17 22:55:02 +00:00
eca120cde6 tui: ability to cancel request in flight 2024-03-17 22:55:02 +00:00
12d4e495d4 tui: add focus switching between input/messages view 2024-03-17 22:55:02 +00:00
d8c8262890 tui: removed confirm before send, dynamic footer
footer now rendered based on model data, instead of being set to a fixed
string
2024-03-17 22:55:02 +00:00
758f74aba5 tui: use ctx chroma highlighter 2024-03-17 22:55:02 +00:00
1570c23d63 Add initial TUI 2024-03-17 22:55:02 +00:00
46149e0b67 Attempt to fix anthropic tool calling
Models have been way too eager to use tools when the task does not
require it (for example, reading the filesystem in order to show a
code example)
2024-03-17 22:55:02 +00:00
c2c61e2aaa Improve title generation prompt performance
The previous prompt was utterly broken with Anthropic models; they would
just try to continue the conversation
2024-03-17 22:55:02 +00:00
5e880d3b31 Lead anthropic function call XML with newline 2024-03-17 22:55:02 +00:00
62f07dd240 Fix double reply callback on tool calls 2024-03-17 22:55:02 +00:00
ec1f326c2a Add store.AddReply 2024-03-14 06:01:42 +00:00
db116660a5 Removed tool usage logging to stdout 2024-03-14 06:01:42 +00:00
32eab7aa35 Update anthropic function/tool calling
Strip the function call XML from the returned/saved content, which
should allow for model switching between openai/anthropic (and
others?) within the same conversation involving tool calls.

This involves reconstructing the function call XML when sending requests
to anthropic
2024-03-12 20:54:02 +00:00
91d3c9c2e1 Update ChatCompletionClient
Instead of CreateChatCompletion* accepting a pointer to a slice of reply
messages, it accepts a callback which is called with each successive
reply in the conversation.

This gives the caller more flexibility in how it handles replies (e.g.
it can react to them immediately now, instead of waiting for the entire
call to finish)
2024-03-12 20:39:34 +00:00
8bdb155bf7 Update ChatCompletionClient to accept context.Context 2024-03-12 18:24:46 +00:00
045146bb5c Moved flag 2024-03-12 08:03:04 +00:00
2c7bdd8ebf Store enabled tools in lmcli.Context 2024-03-12 08:01:53 +00:00
7d56726c78 Add --model flag completion 2024-03-12 07:43:57 +00:00
f2c7d2bdd0 Store ChromaHighlighter in lmcli.Context and use it
In preparation for TUI
2024-03-12 07:43:40 +00:00
0a27b9a8d3 Project refactor, add anthropic API support
- Split pkg/cli/cmd.go into new pkg/cmd package
- Split pkg/cli/functions.go into pkg/lmcli/tools package
- Refactor pkg/cli/openai.go to pkg/lmcli/provider/openai

Other changes:

- Made models configurable
- Slight config reorganization
2024-03-12 01:01:19 -06:00
2611663168 Add --count flag to list command, lower default from 25 to 5 2024-02-22 05:07:16 +00:00
120e61e88b Fixed variable shadowing bug in ls command 2024-02-22 05:00:46 +00:00
51ce74ad3a Add --offset flag to edit command 2024-01-09 18:10:05 +00:00
b93ee94233 Rename lsCmd to listCmd, add ls as an alias 2024-01-03 17:45:02 +00:00
db788760a3 Adjust help messages 2024-01-03 17:27:58 +00:00
242ed886ec Show lmcli usage by default 2024-01-03 17:27:58 +00:00
02a23b9035 Add clone command
Use RunE instead of Run, and adjust rootCmd so that we control
how error messages are printed (in main())
2024-01-03 17:26:57 +00:00
b3913d0027 Add limit to number of conversations shown by default by lmcli ls 2024-01-03 17:26:09 +00:00
1184f9aaae Changed how conversations are grouped by age in lmcli ls 2024-01-03 17:26:09 +00:00
a25d0d95e8 Don't export some additional functions, rename slightly 2024-01-03 17:24:52 +00:00
becaa5c7c0 Redo flag descriptions 2024-01-03 05:50:16 +00:00
239ded18f3 Add edit command
Various refactoring:
- reduced repetition with conversation message handling
- made some functions internal
2024-01-02 04:31:21 +00:00
59e78669c8 Fix CreateChatCompletion
Don't double-append toolReplies
2023-12-06 05:51:14 +00:00
1966ec881b Make lmcli rm allow removing multiple conversations 2023-12-06 05:51:14 +00:00
1e8ff60c54 Add lmcli rename to rename conversations 2023-11-29 15:33:25 +00:00
f206334e72 Use MessageRole constants elsewhere 2023-11-29 05:57:38 +00:00
5615051637 Improve config handling
- Back up the existing config if we're saving it to add configuration
  defaults
- Output messages when saving/backing up the configuration file
2023-11-29 05:54:05 +00:00
d32e9421fe Add openai.enabledTools config key
By default none are enabled; the user must explicitly enable them in
the configuration.
2023-11-29 05:27:58 +00:00
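A plausible config fragment for this key. The `openai.enabledTools` name comes from the commit; the YAML structure and tool names shown are assumptions.

```yaml
# Hypothetical shape — key name from the commit, layout assumed.
openai:
  enabledTools:
    - read_file
    - read_dir
```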
e29dbaf2a3 Code deduplication 2023-11-29 05:15:32 +00:00
c64bc370f4 Don't include system message when generating conversation title 2023-11-29 04:51:38 +00:00
4f37ed046b Delete 'retried' messages in lmcli retry 2023-11-29 04:50:45 +00:00
ed6ee9bea9 Add *Message[] parameter to CreateChatCompletion methods
Allows replies (tool calls, user-facing messages) to be added in sequence
as CreateChatCompletion* recurses into itself.

Cleaned up cmd.go: no longer need to create a Message based on the
string content response.
2023-11-29 04:43:53 +00:00
e850c340b7 Add initial support for tool/function calling
Adds the following tools:
- read_dir - list a directory's contents
- read_file - read the content of a file
- write_file - write contents to a file
- insert_file_lines - insert lines in a file
- replace_file_lines - replace or remove lines in a file
2023-11-27 05:26:20 +00:00
1e63c09907 Update prompt used to generate conversation title 2023-11-27 05:21:41 +00:00
2f3d95356a Be explicit with openai response choices limit (n parameter) 2023-11-25 13:39:52 -07:00
137c568129 Minor cleanup 2023-11-25 01:26:37 +00:00
c02b21ca37 Refactor the last refactor :)
Removed HandlePartialResponse, add LLMRequest which handles all common
logic of making LLM requests and returning/showing their response.
2023-11-24 15:17:24 +00:00
6249fbc8f8 Refactor streamed response handling
Update CreateChatCompletionStream to return the entire response upon
stream completion. Renamed HandleDelayedResponse to
HandleDelayedContent, which no longer returns the content.

Removes the need to wrap HandleDelayedContent in an immediately invoked
function and to pass the completed response over a channel. Also
allows us to better handle the case of a partial response.
2023-11-24 03:45:43 +00:00
a2bd911ac8 Add retry and continue commands 2023-11-22 06:53:22 +00:00