Matt Low
12d4e495d4
tui: add focus switching between input/messages view
2024-03-17 22:55:02 +00:00
Matt Low
d8c8262890
tui: remove confirm before send, add dynamic footer
...
The footer is now rendered from model data instead of being set to a
fixed string.
2024-03-17 22:55:02 +00:00
Matt Low
758f74aba5
tui: use ctx chroma highlighter
2024-03-17 22:55:02 +00:00
Matt Low
1570c23d63
Add initial TUI
2024-03-17 22:55:02 +00:00
Matt Low
46149e0b67
Attempt to fix anthropic tool calling
...
Models have been way too eager to use tools when the task does not
require them (for example, reading the filesystem in order to show a
code example).
2024-03-17 22:55:02 +00:00
Matt Low
c2c61e2aaa
Improve title generation prompt performance
...
The previous prompt was utterly broken with Anthropic models; they would
just try to continue the conversation.
2024-03-17 22:55:02 +00:00
Matt Low
5e880d3b31
Lead anthropic function call XML with newline
2024-03-17 22:55:02 +00:00
Matt Low
62f07dd240
Fix double reply callback on tool calls
2024-03-17 22:55:02 +00:00
Matt Low
ec1f326c2a
Add store.AddReply
2024-03-14 06:01:42 +00:00
Matt Low
db116660a5
Removed tool usage logging to stdout
2024-03-14 06:01:42 +00:00
Matt Low
32eab7aa35
Update anthropic function/tool calling
...
Strip the function call XML from the returned/saved content, which
should allow for model switching between openai/anthropic (and
others?) within the same conversation involving tool calls.
This involves reconstructing the function call XML when sending requests
to anthropic.
2024-03-12 20:54:02 +00:00
Matt Low
91d3c9c2e1
Update ChatCompletionClient
...
Instead of CreateChatCompletion* accepting a pointer to a slice of reply
messages, it accepts a callback which is called with each successive
reply in the conversation.
This gives the caller more flexibility in how it handles replies (e.g.
it can react to them immediately now, instead of waiting for the entire
call to finish)
2024-03-12 20:39:34 +00:00
Matt Low
8bdb155bf7
Update ChatCompletionClient to accept context.Context
2024-03-12 18:24:46 +00:00
Matt Low
045146bb5c
Moved flag
2024-03-12 08:03:04 +00:00
Matt Low
2c7bdd8ebf
Store enabled tools in lmcli.Context
2024-03-12 08:01:53 +00:00
Matt Low
7d56726c78
Add --model flag completion
2024-03-12 07:43:57 +00:00
Matt Low
f2c7d2bdd0
Store ChromaHighlighter in lmcli.Context and use it
...
In preparation for TUI
2024-03-12 07:43:40 +00:00
Matt Low
0a27b9a8d3
Project refactor, add anthropic API support
...
- Split pkg/cli/cmd.go into new pkg/cmd package
- Split pkg/cli/functions.go into pkg/lmcli/tools package
- Refactor pkg/cli/openai.go to pkg/lmcli/provider/openai
Other changes:
- Made models configurable
- Slight config reorganization
2024-03-12 01:01:19 -06:00
Matt Low
2611663168
Add --count flag to list command, lower default from 25 to 5
2024-02-22 05:07:16 +00:00
Matt Low
120e61e88b
Fixed variable shadowing bug in ls command
2024-02-22 05:00:46 +00:00
Matt Low
fa966d30db
Update README.md
2024-01-11 10:27:11 -07:00
Matt Low
51ce74ad3a
Add --offset flag to edit command
2024-01-09 18:10:05 +00:00
Matt Low
b93ee94233
Rename `lsCmd` to `listCmd`, add `ls` as an alias
2024-01-03 17:45:02 +00:00
Matt Low
db788760a3
Adjust help messages
2024-01-03 17:27:58 +00:00
Matt Low
242ed886ec
Show lmcli usage by default
2024-01-03 17:27:58 +00:00
Matt Low
02a23b9035
Add clone command
...
Use RunE instead of Run, and adjust rootCmd so that we control how error
messages are printed (in main()).
2024-01-03 17:26:57 +00:00
Matt Low
b3913d0027
Add limit to number of conversations shown by default by `lmcli ls`
2024-01-03 17:26:09 +00:00
Matt Low
1184f9aaae
Changed how conversations are grouped by age in `lmcli ls`
2024-01-03 17:26:09 +00:00
Matt Low
a25d0d95e8
Don't export some additional functions, rename slightly
2024-01-03 17:24:52 +00:00
Matt Low
becaa5c7c0
Redo flag descriptions
2024-01-03 05:50:16 +00:00
Matt Low
239ded18f3
Add edit command
...
Various refactoring:
- reduced repetition with conversation message handling
- made some functions internal
2024-01-02 04:31:21 +00:00
Matt Low
59e78669c8
Fix CreateChatCompletion
...
Don't double-append toolReplies
2023-12-06 05:51:14 +00:00
Matt Low
1966ec881b
Make `lmcli rm` allow removing multiple conversations
2023-12-06 05:51:14 +00:00
Matt Low
f6ded3e20e
Update README
2023-11-29 15:38:48 +00:00
Matt Low
1e8ff60c54
Add `lmcli rename` to rename conversations
2023-11-29 15:33:25 +00:00
Matt Low
af2fccd4ee
Fix README errors
2023-11-29 15:33:25 +00:00
Matt Low
f206334e72
Use MessageRole constants elsewhere
2023-11-29 05:57:38 +00:00
Matt Low
5615051637
Improve config handling
...
- Back up the existing config if we're saving it to add configuration
defaults
- Output messages when saving/backing up the configuration file
2023-11-29 05:54:05 +00:00
Matt Low
c46500de4e
Update README.md features
2023-11-29 05:45:03 +00:00
Matt Low
d5dde10dbf
Add tools section to README.md
2023-11-29 05:39:37 +00:00
Matt Low
d32e9421fe
Add openai.enabledTools config key
...
By default none are enabled; the user must explicitly enable them in the
configuration.
2023-11-29 05:27:58 +00:00
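Enabling tools might then look like this in the config file. A hypothetical fragment: the `openai.enabledTools` key comes from the commit, but the surrounding structure and file format are assumptions.

```yaml
openai:
  enabledTools:
    - read_file
    - write_file
```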
Matt Low
e29dbaf2a3
Code deduplication
2023-11-29 05:15:32 +00:00
Matt Low
c64bc370f4
Don't include system message when generating conversation title
2023-11-29 04:51:38 +00:00
Matt Low
4f37ed046b
Delete 'retried' messages in `lmcli retry`
2023-11-29 04:50:45 +00:00
Matt Low
ed6ee9bea9
Add *Message[] parameter to CreateChatCompletion methods
...
Allows replies (tool calls, user-facing messages) to be added in
sequence as CreateChatCompletion* recurses into itself.
Cleaned up cmd.go: no longer need to create a Message based on the
string content response.
2023-11-29 04:43:53 +00:00
Matt Low
e850c340b7
Add initial support for tool/function calling
...
Adds the following tools:
- read_dir - list a directory's contents
- read_file - read the content of a file
- write_file - write contents to a file
- insert_file_lines - insert lines in a file
- replace_file_lines - replace or remove lines in a file
2023-11-27 05:26:20 +00:00
Matt Low
1e63c09907
Update prompt used to generate conversation title
2023-11-27 05:21:41 +00:00
Matt Low
2f3d95356a
Be explicit with openai response choices limit (`n` parameter)
2023-11-25 13:39:52 -07:00
Matt Low
137c568129
Minor cleanup
2023-11-25 01:26:37 +00:00
Matt Low
c02b21ca37
Refactor the last refactor :)
...
Removed HandlePartialResponse, added LLMRequest which handles all the
common logic of making LLM requests and returning/showing their response.
2023-11-24 15:17:24 +00:00