9cd28d28d7
tui: renamed uiCache to views, cleanup
2024-03-29 20:56:39 +00:00
0b991800d6
tui: dynamic input textarea height and styling updates
...
Maintain a height of 4 up to half of the main content area
Add rounded border
2024-03-29 20:00:28 +00:00
5af857edae
tui: truncate title to width
2024-03-29 15:48:50 +00:00
3e24a54d0a
tui: add border above input
2024-03-28 06:53:39 +00:00
a669313a0b
tui: add tool rendering
...
cleaned up message rendering and changed cache semantics
other smaller tweaks
2024-03-26 08:06:46 +00:00
6310021dca
tui: improve footer truncation
2024-03-23 04:08:48 +00:00
ef929da68c
tui: add uiCache
...
Clean up/fix how we calculate the height of the content viewport
2024-03-23 03:55:20 +00:00
c51644e78e
Add dir_tree tool
2024-03-22 20:30:34 +00:00
91c74d9e1e
Update CreateChatCompletion behavior
...
When the last message in the passed messages slice is an assistant
message, treat it as a partial message that is being continued, and
include its content in the newly created reply
Update TUI code to handle new behavior
2024-03-22 20:02:28 +00:00
3185b2d7d6
tui: show the message position when focused
2024-03-17 22:55:02 +00:00
6c64f21d9a
tui: support for message retry/continue
...
Better handling of persistence, and we now ensure the response we
persist is trimmed of whitespace, particularly important when a response
is cancelled mid-stream
2024-03-17 22:55:02 +00:00
6f737ad19c
tui: handle text wrapping ourselves, add ctrl+w wrap toggle
...
Gets rid of those pesky trailing characters
2024-03-17 22:55:02 +00:00
a8ffdc156a
tui: open input/messages for editing in $EDITOR
2024-03-17 22:55:02 +00:00
7a974d9764
tui: add ability to select a message
2024-03-17 22:55:02 +00:00
adb61ffa59
tui: conversation rendering tweaks, remove input character limit
2024-03-17 22:55:02 +00:00
1c7ad75fd5
tui: fixed response cancelling
2024-03-17 22:55:02 +00:00
613aa1a552
tui: ctrl+r to retry previous message
2024-03-17 22:55:02 +00:00
71833b89cd
tui: fixed footer styling
2024-03-17 22:55:02 +00:00
2ad93394b1
tui: removed scrollbar
2024-03-17 22:55:02 +00:00
f49b772960
tui: minor fixes and cleanup
2024-03-17 22:55:02 +00:00
29d8138dc0
tui: update todos
2024-03-17 22:55:02 +00:00
3756f6d9e4
tui: add response waiting spinner
2024-03-17 22:55:02 +00:00
41916eb7b3
tui: add LLM response error handling
...
+ various other small tweaks
2024-03-17 22:55:02 +00:00
3892e68251
tui: add a "scroll bar" and error view
2024-03-17 22:55:02 +00:00
8697284064
tui: generate titles for conversations
2024-03-17 22:55:02 +00:00
383d34f311
tui: persist new conversations as well
2024-03-17 22:55:02 +00:00
ac0e380244
tui: add reply persistence
2024-03-17 22:55:02 +00:00
c3a3cb0181
tui: improve footer rendering
...
Made it easier to add segments later, better handling of padding
2024-03-17 22:55:02 +00:00
612ea90417
tui: slight function order change
2024-03-17 22:55:02 +00:00
94508b1dbf
tui: cache highlighted messages
...
Syntax highlighting is fairly expensive, and this means we no longer
need to do syntax highlighting on the entire conversation each time a new
message chunk is received
2024-03-17 22:55:02 +00:00
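The per-message highlight cache described above might be sketched like this (the names `renderCache` and `highlight` are invented for illustration; the idea is only that a new streamed chunk re-highlights the message it belongs to, not the whole conversation):

```go
package main

import (
	"fmt"
	"strings"
)

// highlight stands in for the expensive chroma syntax-highlighting
// step; here it just uppercases the content and counts invocations.
var highlightCalls int

func highlight(content string) string {
	highlightCalls++
	return strings.ToUpper(content)
}

// renderCache maps a message index to its highlighted output, so
// re-rendering the conversation only highlights uncached messages.
type renderCache map[int]string

func (c renderCache) render(i int, content string) string {
	if out, ok := c[i]; ok {
		return out
	}
	out := highlight(content)
	c[i] = out
	return out
}

func main() {
	cache := renderCache{}
	msgs := []string{"hello", "world"}
	// Simulate three full re-renders of the conversation.
	for n := 0; n < 3; n++ {
		for i, m := range msgs {
			cache.render(i, m)
		}
	}
	fmt.Println(highlightCalls) // each message highlighted once
}
```

A real cache would also invalidate the entry for the message currently receiving chunks; this sketch omits that.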
7e002e5214
tui: adjust message header styling
2024-03-17 22:55:02 +00:00
48e4dea3cf
tui: style tweaks
2024-03-17 22:55:02 +00:00
0ab552303d
tui: add contentStyle, applied to overall viewport content
2024-03-17 22:55:02 +00:00
6ce42a77f9
tui: update TODO
2024-03-17 22:55:02 +00:00
2cb1a0005d
tui: fix conversation loading
2024-03-17 22:55:02 +00:00
ea78edf039
tui: use EnabledTools from lmcli.Context
2024-03-17 22:55:02 +00:00
793aaab50e
tui: styling tweak
2024-03-17 22:55:02 +00:00
5afc9667c7
tui: add header with title
2024-03-17 22:55:02 +00:00
dfafc573e5
tui: handle multi part responses
2024-03-17 22:55:02 +00:00
97f81a0cbb
tui: scroll content view with output
...
clean up msgResponseChunk handling
2024-03-17 22:55:02 +00:00
eca120cde6
tui: ability to cancel request in flight
2024-03-17 22:55:02 +00:00
12d4e495d4
tui: add focus switching between input/messages view
2024-03-17 22:55:02 +00:00
d8c8262890
tui: removed confirm before send, dynamic footer
...
footer now rendered based on model data, instead of being set to a fixed
string
2024-03-17 22:55:02 +00:00
758f74aba5
tui: use ctx chroma highlighter
2024-03-17 22:55:02 +00:00
1570c23d63
Add initial TUI
2024-03-17 22:55:02 +00:00
46149e0b67
Attempt to fix anthropic tool calling
...
Models have been way too eager to use tools when the task does not
require it (for example, reading the filesystem in order to show a
code example)
2024-03-17 22:55:02 +00:00
c2c61e2aaa
Improve title generation prompt performance
...
The previous prompt was utterly broken with Anthropic models; they would
just try to continue the conversation
2024-03-17 22:55:02 +00:00
5e880d3b31
Lead anthropic function call XML with newline
2024-03-17 22:55:02 +00:00
62f07dd240
Fix double reply callback on tool calls
2024-03-17 22:55:02 +00:00
ec1f326c2a
Add store.AddReply
2024-03-14 06:01:42 +00:00
db116660a5
Removed tool usage logging to stdout
2024-03-14 06:01:42 +00:00
32eab7aa35
Update anthropic function/tool calling
...
Strip the function call XML from the returned/saved content, which
should allow for model switching between openai/anthropic (and
others?) within the same conversation involving tool calls.
This involves reconstructing the function call XML when sending requests
to anthropic
2024-03-12 20:54:02 +00:00
91d3c9c2e1
Update ChatCompletionClient
...
Instead of CreateChatCompletion* accepting a pointer to a slice of reply
messages, it accepts a callback which is called with each successive
reply in the conversation.
This gives the caller more flexibility in how it handles replies (e.g.
it can react to them immediately now, instead of waiting for the entire
call to finish)
2024-03-12 20:39:34 +00:00
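The callback style this commit describes could look roughly like the following (a minimal sketch; the `Message` type, `ReplyCallback` name, and canned reply are assumptions, not lmcli's actual code):

```go
package main

import "fmt"

// Message is a hypothetical stand-in for lmcli's message type.
type Message struct {
	Role    string
	Content string
}

// ReplyCallback is invoked with each successive reply (assistant
// messages, tool calls, tool results) as the completion proceeds.
type ReplyCallback func(Message)

// CreateChatCompletion sketches the callback style: instead of
// appending replies to a caller-provided slice, it hands each reply
// to the callback as soon as it is produced, so the caller can react
// immediately rather than waiting for the whole call to finish.
func CreateChatCompletion(messages []Message, onReply ReplyCallback) error {
	// A real implementation would call the provider API here and
	// recurse on tool calls; we just emit one canned reply.
	onReply(Message{Role: "assistant", Content: "Hello!"})
	return nil
}

func main() {
	var replies []Message
	_ = CreateChatCompletion(
		[]Message{{Role: "user", Content: "Hi"}},
		func(m Message) {
			// e.g. persist or render the reply right away
			replies = append(replies, m)
		},
	)
	fmt.Println(len(replies), replies[0].Role) // 1 assistant
}
```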
8bdb155bf7
Update ChatCompletionClient to accept context.Context
2024-03-12 18:24:46 +00:00
045146bb5c
Moved flag
2024-03-12 08:03:04 +00:00
2c7bdd8ebf
Store enabled tools in lmcli.Context
2024-03-12 08:01:53 +00:00
7d56726c78
Add --model flag completion
2024-03-12 07:43:57 +00:00
f2c7d2bdd0
Store ChromaHighlighter in lmcli.Context and use it
...
In preparation for TUI
2024-03-12 07:43:40 +00:00
0a27b9a8d3
Project refactor, add anthropic API support
...
- Split pkg/cli/cmd.go into new pkg/cmd package
- Split pkg/cli/functions.go into pkg/lmcli/tools package
- Refactor pkg/cli/openai.go to pkg/lmcli/provider/openai
Other changes:
- Made models configurable
- Slight config reorganization
2024-03-12 01:01:19 -06:00
2611663168
Add --count flag to list command, lower default from 25 to 5
2024-02-22 05:07:16 +00:00
120e61e88b
Fixed variable shadowing bug in ls command
2024-02-22 05:00:46 +00:00
51ce74ad3a
Add --offset flag to edit command
2024-01-09 18:10:05 +00:00
b93ee94233
Rename lsCmd to listCmd, add ls as an alias
2024-01-03 17:45:02 +00:00
db788760a3
Adjust help messages
2024-01-03 17:27:58 +00:00
242ed886ec
Show lmcli usage by default
2024-01-03 17:27:58 +00:00
02a23b9035
Add clone command
...
Use RunE instead of Run, and adjust rootCmd so that we control
how error messages are printed (in main())
2024-01-03 17:26:57 +00:00
b3913d0027
Add limit to number of conversations shown by default by lmcli ls
2024-01-03 17:26:09 +00:00
1184f9aaae
Changed how conversations are grouped by age in lmcli ls
2024-01-03 17:26:09 +00:00
a25d0d95e8
Don't export some additional functions, rename slightly
2024-01-03 17:24:52 +00:00
becaa5c7c0
Redo flag descriptions
2024-01-03 05:50:16 +00:00
239ded18f3
Add edit command
...
Various refactoring:
- reduced repetition with conversation message handling
- made some functions internal
2024-01-02 04:31:21 +00:00
59e78669c8
Fix CreateChatCompletion
...
Don't double-append toolReplies
2023-12-06 05:51:14 +00:00
1966ec881b
Make lmcli rm allow removing multiple conversations
2023-12-06 05:51:14 +00:00
1e8ff60c54
Add lmcli rename to rename conversations
2023-11-29 15:33:25 +00:00
f206334e72
Use MessageRole constants elsewhere
2023-11-29 05:57:38 +00:00
5615051637
Improve config handling
...
- Backup existing config if we're saving it to add configuration
defaults
- Output messages when saving/backing up the configuration file
2023-11-29 05:54:05 +00:00
d32e9421fe
Add openai.enabledTools config key
...
By default, no tools are enabled; they must be explicitly enabled by the
user adding them to this configuration key.
2023-11-29 05:27:58 +00:00
e29dbaf2a3
Code deduplication
2023-11-29 05:15:32 +00:00
c64bc370f4
Don't include system message when generating conversation title
2023-11-29 04:51:38 +00:00
4f37ed046b
Delete 'retried' messages in lmcli retry
2023-11-29 04:50:45 +00:00
ed6ee9bea9
Add *Message[] parameter to CreateChatCompletion methods
...
Allows replies (tool calls, user-facing messages) to be added in sequence
as CreateChatCompletion* recurses into itself.
Cleaned up cmd.go: no longer need to create a Message based on the
string content response.
2023-11-29 04:43:53 +00:00
e850c340b7
Add initial support for tool/function calling
...
Adds the following tools:
- read_dir - list a directory's contents
- read_file - read the content of a file
- write_file - write contents to a file
- insert_file_lines - insert lines in a file
- replace_file_lines - replace or remove lines in a file
2023-11-27 05:26:20 +00:00
1e63c09907
Update prompt used to generate conversation title
2023-11-27 05:21:41 +00:00
2f3d95356a
Be explicit with openai response choices limit (n parameter)
2023-11-25 13:39:52 -07:00
137c568129
Minor cleanup
2023-11-25 01:26:37 +00:00
c02b21ca37
Refactor the last refactor :)
...
Removed HandlePartialResponse, add LLMRequest which handles all common
logic of making LLM requests and returning/showing their response.
2023-11-24 15:17:24 +00:00
6249fbc8f8
Refactor streamed response handling
...
Update CreateChatCompletionStream to return the entire response upon
stream completion. Renamed HandleDelayedResponse to
HandleDelayedContent, which no longer returns the content.
Removes the need to wrap HandleDelayedContent in an immediately invoked
function and to pass the completed response over a channel. Also
allows us to better handle the case of a partial response.
2023-11-24 03:45:43 +00:00
a2bd911ac8
Add retry and continue commands
2023-11-22 06:53:22 +00:00
cb9e27542e
Add --system-prompt and --system-prompt-file flags
...
These allow setting a different system prompt for conversations and
one-shot prompts.
Also add a new `modelDefaults.systemPrompt` configuration key to define
the default system prompt, which can be overridden per-execution with the
--system-prompt or --system-prompt-file flags.
2023-11-22 04:45:06 +00:00
db27a22347
Removed 'get' prefix from DataDir() and ConfigDir()
2023-11-22 03:17:13 +00:00
c8a1e3e105
Allow message input from either args or editor on all relevant commands
...
Those (sub-)commands being: `new`, `reply`, and `prompt`
2023-11-20 16:50:56 +00:00
b5f066ff34
Increase max token length for conversation title generation
2023-11-20 03:48:32 +00:00
e6dcefacf5
Add syntax highlighting
2023-11-19 05:00:59 +00:00
8780856854
Set config defaults using a "default" struct tag
...
Add new SetStructDefaults function to handle the "default" struct tag.
Only works on struct fields which are pointers (in order to be able to
distinguish between not set (nil) and zero values). So, the Config
struct has been updated to use pointer fields and we now need to
dereference those pointers to use them.
2023-11-19 04:37:14 +00:00
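The mechanism this commit describes could be sketched with reflection along these lines (a minimal sketch; the `Config` fields and default values shown are invented, and the real implementation likely handles more field kinds):

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// SetStructDefaults fills nil pointer fields from their "default"
// struct tag. Pointer fields let us distinguish "not set" (nil)
// from a genuine zero value like "" or 0.
func SetStructDefaults(s interface{}) {
	v := reflect.ValueOf(s).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag, ok := t.Field(i).Tag.Lookup("default")
		if !ok || !v.Field(i).IsNil() {
			continue // no default, or already set by the user
		}
		f := v.Field(i)
		switch f.Type().Elem().Kind() {
		case reflect.String:
			f.Set(reflect.ValueOf(&tag))
		case reflect.Int:
			if n, err := strconv.Atoi(tag); err == nil {
				f.Set(reflect.ValueOf(&n))
			}
		}
	}
}

// Config is a hypothetical example, not lmcli's actual config struct.
type Config struct {
	Model     *string `default:"gpt-4"`
	MaxTokens *int    `default:"256"`
}

func main() {
	c := &Config{}
	SetStructDefaults(c)
	// Callers must dereference the pointer fields to use the values.
	fmt.Println(*c.Model, *c.MaxTokens) // gpt-4 256
}
```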
6426b04e2c
Add RenderConversation to split out common message rendering logic
2023-11-18 16:17:13 +00:00
965043c908
Add --model flag to control which language model to use
2023-11-18 16:17:13 +00:00
8bc8312154
Add --length flag to control model output "maxTokens"
2023-11-18 16:17:13 +00:00
681b52a55c
Handle empty reply
2023-11-18 16:17:13 +00:00
22e0ff4115
Alter format and add colouring to user/role message headings
2023-11-18 16:16:46 +00:00
6599af042b
Minor refactor
...
- Use init() function to set up commands
- Expose an Execute() function instead of the root command
2023-11-14 17:04:12 +00:00