- More emphasis on the `api` package: it now holds the database model
  structs previously in `lmcli/models` (which has been removed), as well
  as the tool spec, call, and result types. `tools.Tool` is now
  `api.ToolSpec`, and `api.ChatCompletionClient` was renamed to
  `api.ChatCompletionProvider`.
- Changed the ChatCompletion interface and its implementations so they no
  longer perform automatic tool call recursion: they simply return a
  message containing tool calls, and the caller decides what to do with
  it (e.g. prompt for user confirmation before executing; see the flow
  sketch after the TODO list below)
- `api.ChatCompletionProvider` functions have had their ReplyCallback
  parameter removed, as they now return only a single reply (a rough
  interface sketch follows this list)
- Added a top-level `agent` package and moved the built-in tool
  implementations under `agent/toolbox`. `tools.ExecuteToolCalls` is now
  `agent.ExecuteToolCalls`.
- Fixed request context handling in the openai, google, and ollama
  providers (they now use `NewRequestWithContext`) and cleaned up request
  cancellation in the TUI
- Fixed a tool call persistence bug in the TUI (messages with empty
  content were being skipped)
- Tool calling is now handled from the TUI layer
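
For orientation, here is a rough sketch of what the reworked provider
interface might look like after these changes. The method name, struct
fields, and exact signature below are assumptions for illustration, not
the repository's actual definitions:

```go
// Hypothetical shape of api.ChatCompletionProvider after the refactor: one
// reply per call, context-aware, no ReplyCallback, and tool calls returned
// to the caller instead of being executed automatically.
package api

import "context"

// ToolSpec describes a tool the model may call (formerly tools.Tool).
// Field names here are assumed, not the actual definition.
type ToolSpec struct {
	Name        string
	Description string
	// Parameter definitions omitted.
}

// Message stands in for the chat/tool message types now living in api.
type Message struct {
	Role    string
	Content string
}

// ChatCompletionProvider was previously named ChatCompletionClient.
type ChatCompletionProvider interface {
	// CreateChatCompletion returns a single reply. If the model requests
	// tool calls, they come back in the reply for the caller to handle.
	CreateChatCompletion(ctx context.Context, messages []Message, tools []ToolSpec) (Message, error)
}
```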
TODO:
- Prompt users before executing tool calls
- Automatically send tool results to the model (or make this toggleable)
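
To make the new control flow concrete, the following is a minimal,
self-contained sketch of how a caller (e.g. the TUI layer) might handle a
reply that requests tool calls, including the confirmation and
result-forwarding steps from the TODOs above. The helper names
(`completeOnce`, `executeToolCalls`, `confirm`) and message shapes are
stand-ins, not lmcli's real API:

```go
// Caller-driven tool call flow: the provider returns one reply, and the
// caller prompts, executes, and decides whether to send results back.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

type ToolCall struct {
	Name string
	Args map[string]any
}

type Message struct {
	Role      string
	Content   string
	ToolCalls []ToolCall
}

// completeOnce stands in for a single provider call: one reply, no recursion.
func completeOnce(history []Message) Message {
	// Fake a reply that requests a tool call so the flow below is exercised.
	return Message{
		Role:      "assistant",
		ToolCalls: []ToolCall{{Name: "read_file", Args: map[string]any{"path": "main.go"}}},
	}
}

// executeToolCalls stands in for agent.ExecuteToolCalls.
func executeToolCalls(calls []ToolCall) []Message {
	results := make([]Message, 0, len(calls))
	for _, c := range calls {
		results = append(results, Message{Role: "tool", Content: "result of " + c.Name})
	}
	return results
}

// confirm asks the user before any tools run (the first TODO above).
func confirm(calls []ToolCall) bool {
	fmt.Printf("Model requested %d tool call(s). Execute? [y/N] ", len(calls))
	line, _ := bufio.NewReader(os.Stdin).ReadString('\n')
	return strings.HasPrefix(strings.ToLower(strings.TrimSpace(line)), "y")
}

func main() {
	history := []Message{{Role: "user", Content: "What's in main.go?"}}
	reply := completeOnce(history)
	history = append(history, reply)

	if len(reply.ToolCalls) > 0 && confirm(reply.ToolCalls) {
		results := executeToolCalls(reply.ToolCalls)
		history = append(history, results...)
		// Sending the results back for a follow-up reply would be the
		// (possibly toggleable) second TODO above.
		_ = completeOnce(history)
	}
	fmt.Println(reply.Content)
}
```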
Updated the behaviour of commands:
- `lmcli edit`
  - by default, create a new message branch containing the edited
    contents (see the branching sketch after this list)
  - add --in-place to avoid creating a branch
  - no longer delete the messages that follow the edited message
  - only perform the edit; don't fetch a new response
- `lmcli retry`
  - create a new branch rather than replacing the old messages
  - add --offset to change which message to retry from
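
The message branching these commands rely on isn't spelled out here; as a
purely hypothetical illustration (the struct and helper below are
invented, not lmcli's data model), editing or retrying can be pictured as
adding a sibling message under the same parent rather than rewriting what
follows:

```go
// Hypothetical picture of the branching behaviour described above.
package main

import "fmt"

type Message struct {
	ID       int
	ParentID int // 0 means the conversation root
	Content  string
}

// addBranch appends a new message alongside any existing replies to the
// given parent, leaving the old branch intact.
func addBranch(msgs []Message, parentID int, content string) []Message {
	return append(msgs, Message{ID: len(msgs) + 1, ParentID: parentID, Content: content})
}

func main() {
	msgs := []Message{
		{ID: 1, ParentID: 0, Content: "user: hello"},
		{ID: 2, ParentID: 1, Content: "assistant: hi there"},
	}

	// "edit" on message 2: a sibling branch holds the edited content, and
	// message 2 (plus anything after it) is left untouched.
	msgs = addBranch(msgs, 1, "assistant: hi there (edited)")

	// "retry" with an offset: pick an earlier parent to branch from.
	msgs = addBranch(msgs, 1, "assistant: hello again (retry)")

	for _, m := range msgs {
		fmt.Printf("#%d (parent %d): %s\n", m.ID, m.ParentID, m.Content)
	}
}
```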
Other changes:
- Split pkg/cli/cmd.go into a new pkg/cmd package
- Split pkg/cli/functions.go into the new pkg/lmcli/tools package
- Refactored pkg/cli/openai.go into pkg/lmcli/provider/openai
- Made models configurable
- Slight config reorganization