Commit Graph

232 Commits

Author SHA1 Message Date
Matt Low 8ca044b6af Update README.md 2024-06-23 21:42:23 +00:00
Matt Low 6f5cf68208 Update TODO.md 2024-06-23 21:42:23 +00:00
Matt Low 914d9ac0c1 Renamed RequestParameters.ToolBag to Toolbox 2024-06-23 19:10:03 +00:00
Matt Low 8ddac2f820 Introduce "agents"
An agent is currently a name given to a system prompt and a set of
tools which the agent has access to.

This resolves the previous issue of the set of configured tools being
available in *all* contexts, which wasn't always desired. Tools are now
only available when an agent is explicitly requested using the
`-a/--agent` flag.

Agents are expected to be expanded on: the concept of task-specialized
agents (e.g. coding), the ability to define a set of files an agent
should always have access to for RAG purposes, etc.

Other changes:

- Removed the "tools" top-level config structure (though this is expected
to come back along with the ability to define custom tools).

- Renamed `pkg/agent` to `pkg/agents`
2024-06-23 19:05:30 +00:00
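For illustration, a minimal Go sketch of the "agent" concept described in the commit above: a name tied to a system prompt and a toolbox, only looked up when `-a/--agent` is passed. The type and field names are assumptions, not lmcli's actual definitions.

```go
package agents

// Agent pairs a system prompt with the set of tools the agent may call.
// (Illustrative shape only; not the real lmcli struct.)
type Agent struct {
	Name         string   // value passed via -a/--agent
	SystemPrompt string
	Toolbox      []string // names of enabled tools, e.g. "dir_tree"
}

// Lookup returns the configured agent with the given name. When no agent
// is requested, no tools are made available to the model.
func Lookup(configured []Agent, name string) (Agent, bool) {
	for _, a := range configured {
		if a.Name == name {
			return a, true
		}
	}
	return Agent{}, false
}
```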
Matt Low cea5118cac Only include user and assistant messages when generating titles
(again)
2024-06-23 18:55:04 +00:00
Matt Low a43a91c6ff Update system prompt handling (again)
Added `api.ApplySystemPrompt` and renamed `GetSystemPrompt` to
`DefaultSystemPrompt`.
2024-06-23 18:36:51 +00:00
Matt Low ba7018af11 Update config handling
- Stop using pointers where unnecessary
- Remove the default system prompt
- Set indent level to 2 when writing config
- Update ordering of config struct, which affects marshalling
- Make provider `name` optional, defaulting to the provider's `kind`
2024-06-23 16:28:21 +00:00
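A hedged sketch of two of the points above, provider `name` defaulting to `kind` and writing config with a 2-space indent, using yaml.v3's encoder; the struct fields are assumptions rather than lmcli's actual config types.

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Provider is an assumed shape for a provider entry in the config file.
type Provider struct {
	Name string `yaml:"name,omitempty"` // optional
	Kind string `yaml:"kind"`           // e.g. "openai", "anthropic"
}

// EffectiveName falls back to the provider's kind when name is unset.
func (p Provider) EffectiveName() string {
	if p.Name == "" {
		return p.Kind
	}
	return p.Name
}

// writeConfig marshals cfg to path using a 2-space indent.
func writeConfig(path string, cfg any) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	enc := yaml.NewEncoder(f)
	defer enc.Close()
	enc.SetIndent(2)
	return enc.Encode(cfg)
}
```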
Matt Low f89cc7b410 Add validation to command line flags + update system prompt handling
Renamed `applyPromptFlags` to `applyGenerationFlags` and added
`validateGenerationFlags`
2024-06-23 06:08:15 +00:00
Matt Low 677cfcfebf Slight cleanup to openai
Removed /v1 from the base URL and removed some slight repetition
2024-06-23 04:17:53 +00:00
Matt Low 11402c5534 Update to yaml.v3
Bonus: better rendering of tool results in the chat
2024-06-23 04:04:01 +00:00
Matt Low a1fc8a637b Include types in provider files; splitting them out wasn't ergonomic 2024-06-23 01:48:31 +00:00
Matt Low 94d84ba7d7 Support Anthropic's native tool calling API 2024-06-23 01:47:31 +00:00
Matt Low c50b6b154d Add TODO.md 2024-06-21 12:56:11 -06:00
Matt Low 31df055430 Always show tool calls, toggle whether results are hidden 2024-06-21 06:05:00 +00:00
Matt Low c30e652103 Cleaned up assistant cursor handling 2024-06-21 05:52:59 +00:00
Matt Low 3fde58b77d Package restructure and API changes, several fixes
- More emphasis on `api` package. It now holds database model structs
  from `lmcli/models` (which is now gone) as well as the tool spec,
  call, and result types. `tools.Tool` is now `api.ToolSpec`.
  `api.ChatCompletionClient` was renamed to
  `api.ChatCompletionProvider`.

- Change ChatCompletion interface and implementations to no longer do
  automatic tool call recursion - they simply return a ToolCall message
  which the caller can decide what to do with (e.g. prompt for user
  confirmation before executing)

- `api.ChatCompletionProvider` functions have had their ReplyCallback
  parameter removed, as now they only return a single reply.

- Added a top-level `agent` package, moved the current built-in tools
  implementations under `agent/toolbox`. `tools.ExecuteToolCalls` is now
  `agent.ExecuteToolCalls`.

- Fixed request context handling in openai, google, ollama (use
  `NewRequestWithContext`), cleaned up request cancellation in TUI

- Fixed a tool call persistence bug in the TUI (we were skipping messages
  with empty content)

- Now handle tool calling from TUI layer

TODO:
- Prompt users before executing tool calls
- Automatically send tool results to the model (or make this toggleable)
2024-06-21 05:24:02 +00:00
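A sketch of the caller-driven flow implied above: the provider returns a reply that may carry tool calls, and the caller (e.g. the TUI) decides whether to execute them. Types and names here are simplified stand-ins, not the actual `api` package definitions.

```go
package example

import "errors"

// Simplified stand-ins for the real lmcli types (assumed shapes).
type ToolCall struct {
	Name string
	Args map[string]any
}

type Message struct {
	Role      string
	Content   string
	ToolCalls []ToolCall
}

// handleReply shows tool execution being the caller's decision rather
// than something the completion provider does recursively on its own.
func handleReply(reply Message, userConfirmed bool, execute func([]ToolCall) ([]Message, error)) ([]Message, error) {
	if len(reply.ToolCalls) == 0 {
		return []Message{reply}, nil // ordinary assistant reply
	}
	if !userConfirmed {
		return nil, errors.New("tool calls were not confirmed by the user")
	}
	results, err := execute(reply.ToolCalls)
	if err != nil {
		return nil, err
	}
	// The caller may now send results back to the model, or stop here.
	return append([]Message{reply}, results...), nil
}
```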
Matt Low 85a2abbbf3 Use @ as the separator between model and provider
Also put the provider after the model (e.g. `gpt-4o@openai`,
`openai/gpt-4o@openrouter`). The aim here was to reduce clashing and
confusion with existing model naming conventions.
2024-06-20 23:37:11 +00:00
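A small illustrative parser for the `model@provider` form: splitting on the last `@` keeps slash-separated model names such as `openai/gpt-4o` intact. This is a sketch of the idea, not necessarily how lmcli implements it.

```go
package example

import "strings"

// splitModelProvider splits "model@provider" on the last '@', so
// "openai/gpt-4o@openrouter" yields ("openai/gpt-4o", "openrouter").
// With no '@' present, the provider part is empty.
func splitModelProvider(s string) (model, provider string) {
	i := strings.LastIndex(s, "@")
	if i < 0 {
		return s, ""
	}
	return s[:i], s[i+1:]
}
```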
Matt Low dfe43179c0 Include token count in api.Chunk
And calculate the tokens/chunk for gemini responses, fixing the tok/s
meter for gemini models.

Further, only consider the first candidate of streamed gemini responses.
2024-06-09 20:49:18 +00:00
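An assumed shape for a token-counting chunk and the tokens-per-second figure a tok/s meter needs; the field names are guesses, not the actual `api.Chunk` definition.

```go
package example

import "time"

// Chunk pairs streamed content with the number of tokens it represents.
// (Assumed fields; not the real api.Chunk.)
type Chunk struct {
	Content    string
	TokenCount int
}

// tokensPerSecond sums the tokens received so far and divides by the
// elapsed streaming time.
func tokensPerSecond(chunks []Chunk, elapsed time.Duration) float64 {
	if elapsed <= 0 {
		return 0
	}
	total := 0
	for _, c := range chunks {
		total += c.TokenCount
	}
	return float64(total) / elapsed.Seconds()
}
```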
Matt Low 42c3297e54 Make Conversation a pointer reference on Message
Instead of a value, which led to some odd handling of conversation
references.

Also fixed some formatting and removed an unnecessary (and probably
broken) setting of ConversationID in a call to
`cmdutil.HandleConversationReply`
2024-06-09 18:51:44 +00:00
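The shape of the change above, sketched with illustrative structs: Message holds a pointer to its Conversation instead of an embedded value, so references to the same conversation are shared rather than copied.

```go
package example

// Illustrative shapes only; not the actual lmcli models.
type Conversation struct {
	ID    uint
	Title string
}

type Message struct {
	ID             uint
	ConversationID uint
	Conversation   *Conversation // pointer reference, not an embedded value
	Content        string
}
```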
Matt Low a22119f738 Better handling of newly saved conversations
When a new conversation is created in the chat view's
`persistConversation`, we now set `rootMessages` appropriately.
2024-06-09 18:51:44 +00:00
Matt Low a2c860252f Refactor pkg/lmcli/provider
Moved the `ChatCompletionClient` interface to `pkg/api`, moved individual
providers to `pkg/api/provider`
2024-06-09 18:31:43 +00:00
Matt Low d2d946b776 Wrap chunk content in a Chunk type
Preparing to include additional information with each chunk (e.g. token
count)
2024-06-09 18:31:43 +00:00
Matt Low c963747066 Store fixes
We were taking double pointers (`**T`) in some areas, and
we were not setting foreign references correctly in `StartConversation`
and `Reply`.
2024-06-09 18:31:40 +00:00
Matt Low e334d9fc4f Remove forgotten printf 2024-06-09 16:19:22 +00:00
Matt Low c1ead83939 Rename shared.State to shared.Shared 2024-06-09 16:19:19 +00:00
Matt Low c9e92e186e Chat view cleanup
Replace `waitingForReply` and the `status` string with the `state`
variable.
2024-06-09 16:19:17 +00:00
Matt Low 45df957a06 Fixes to message/conversation handling in tui chat view
This set of changes fixes root/child message cycling and ensures all
database operations happen within a `tea.Cmd`
2024-06-08 21:28:29 +00:00
Matt Low 136c463924 Split chat view into files 2024-06-02 22:40:46 +00:00
Matt Low 2580087b4d Fixed gemini system prompt handling 2024-06-02 22:37:50 +00:00
Matt Low 60a474d516 Implement PathToRoot and PathToLeaf with one query
After fetching all of a conversation's messages, we traverse each
message's Parent or SelectedReply fields to build the message "path"
in memory
2024-06-01 06:40:59 +00:00
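A sketch of that in-memory traversal: load the conversation's messages once, index them by ID, then walk Parent links upward (path to root) or SelectedReply links downward (path to leaf). Field names follow the commit's wording, but the exact shapes are assumptions.

```go
package example

// Message is an illustrative shape with the links the commit mentions.
type Message struct {
	ID              uint
	ParentID        *uint
	SelectedReplyID *uint
}

// pathToRoot walks Parent links upward using only the already-fetched
// messages, returning them ordered root -> starting message.
func pathToRoot(byID map[uint]Message, fromID uint) []Message {
	var path []Message
	for id := fromID; ; {
		m, ok := byID[id]
		if !ok {
			break
		}
		path = append([]Message{m}, path...) // prepend
		if m.ParentID == nil {
			break
		}
		id = *m.ParentID
	}
	return path
}

// pathToLeaf follows SelectedReply links downward until a message has
// no selected reply.
func pathToLeaf(byID map[uint]Message, fromID uint) []Message {
	var path []Message
	for id := fromID; ; {
		m, ok := byID[id]
		if !ok {
			break
		}
		path = append(path, m)
		if m.SelectedReplyID == nil {
			break
		}
		id = *m.SelectedReplyID
	}
	return path
}
```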
Matt Low ea576d24a6 Add Ollama support 2024-06-01 01:38:45 +00:00
Matt Low 465b1d333e Fixed handling of long (slash separated) and short model identifiers
Renamed `GetCompletionProvider` to `GetModelProvider` and updated it to
return the model's short name (the one to use when making requests)
2024-05-30 19:06:18 +00:00
Matt Low b29a4c8b84 Fixed regression from 3536438d
We were sending an empty string to the output channel when `ping`
messages were received from Anthropic's API. This was causing the TUI to
break since we started doing an empty chunk check (and mistakenly not
waiting for future chunks if one was received).

This commit makes it so we no longer send an empty string on the ping message
from Anthropic, and we update the handling of msgAssistantChunk and
msgAssistantReply to make it less likely that we forget to wait for the
next chunk/reply.
2024-05-30 18:58:03 +00:00
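A minimal sketch of the first half of that fix, assuming a simple streaming loop: ping events produce no content, so nothing is forwarded for them and consumers never see an empty chunk.

```go
package example

// forwardChunks copies streamed text into out, skipping empty chunks
// (such as those produced by ping keep-alive events) so the consumer
// never has to special-case an empty string.
func forwardChunks(in <-chan string, out chan<- string) {
	for chunk := range in {
		if chunk == "" {
			continue // e.g. a ping event; nothing to display
		}
		out <- chunk
	}
	close(out)
}
```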
Matt Low 58e1b84fea Documentation tweak 2024-05-30 18:24:01 +00:00
Matt Low a6522dbcd0 Generate title prompt tweak 2024-05-30 18:24:01 +00:00
Matt Low 97cd047861 Cleaned up tui view switching 2024-05-30 07:18:31 +00:00
Matt Low ed784bb1cf Clean up tui View handling 2024-05-30 07:05:08 +00:00
Matt Low c1792f27ff Split up tui code into packages (views/*, shared, util) 2024-05-30 06:44:40 +00:00
Matt Low 0ad698a942 Update GenerateTitle
Show the conversation and expect the result back as JSON
2024-05-28 07:37:09 +00:00
Matt Low 0d66a49997 Add ability to cycle through conversation branches in tui 2024-05-28 06:34:11 +00:00
Matt Low 008fdc0d37 Update title generation prompt 2024-05-23 06:01:30 +00:00
Matt Low eec9eb41e9 Tiny formatting fix 2024-05-23 05:53:13 +00:00
Matt Low 437997872a Improve message wrapping behavior 2024-05-22 16:57:52 +00:00
Matt Low 3536438dd1 Add cursor to indicate the assistant is responding
A better/more natural indication that the model is doing something
2024-05-22 16:25:16 +00:00
Matt Low f5ce970102 Set default retry offset to 0 2024-05-21 00:13:56 +00:00
Matt Low 5c1248184b Update dir_tree to have maximum depth of 5
Until we have some mechanism in place for confirming tool calls with the
user before executing them, it's dangerous to allow unlimited depth
2024-05-21 00:08:42 +00:00
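A hedged sketch of enforcing such a depth limit while walking a directory tree; this is illustrative, not the actual dir_tree tool.

```go
package example

import (
	"io/fs"
	"path/filepath"
	"strings"
)

// dirTree lists paths under root, descending at most maxDepth levels
// (5 in the commit above).
func dirTree(root string, maxDepth int) ([]string, error) {
	var entries []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, relErr := filepath.Rel(root, path)
		if relErr != nil {
			return relErr
		}
		depth := 0
		if rel != "." {
			depth = strings.Count(rel, string(filepath.Separator)) + 1
		}
		if depth > maxDepth {
			if d.IsDir() {
				return fs.SkipDir // stop descending past the limit
			}
			return nil
		}
		entries = append(entries, rel)
		return nil
	})
	return entries, err
}
```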
Matt Low 8c53752146 Add message branching
Updated the behaviour of commands:

- `lmcli edit`
  - by default, create a new message branch with the edited contents
  - add --in-place to avoid creating a branch
  - no longer delete messages after the edited message
  - only do the edit, don't fetch a new response
- `lmcli retry`
  - create a new branch rather than replacing old messages
  - add --offset to change where to retry from
2024-05-20 22:29:51 +00:00
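A sketch of how a branch created by `lmcli edit` or `lmcli retry` might be represented with the Parent/SelectedReply links used elsewhere in this history: the new message becomes a sibling of the original, and the parent's selected reply switches to it, leaving the old messages untouched. Names and shapes are illustrative.

```go
package example

// Message is an illustrative shape (see the path traversal sketch above).
type Message struct {
	ID              uint
	ParentID        *uint
	SelectedReplyID *uint
	Content         string
}

// branchFrom adds a sibling of orig holding the edited content and points
// the shared parent's SelectedReplyID at the new branch. The original
// message and its replies remain in place.
func branchFrom(messages map[uint]*Message, orig *Message, edited string, nextID uint) *Message {
	branch := &Message{
		ID:       nextID,
		ParentID: orig.ParentID,
		Content:  edited,
	}
	messages[branch.ID] = branch
	if orig.ParentID != nil {
		if parent, ok := messages[*orig.ParentID]; ok {
			parent.SelectedReplyID = &branch.ID
		}
	}
	return branch
}
```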
Matt Low f6e55f6bff `lmcli chat`: check that conversation exists 2024-05-20 16:07:38 +00:00
Matt Low dc1edf8c3e Split google API types into types.go 2024-05-19 21:50:43 +00:00
Matt Low 62d98289e8 Fix for non-streamed gemini responses 2024-05-19 02:59:43 +00:00