Replaced `LatestConversationMessages` with `LoadConversationList`, which
uses `LastMessageAt` for much faster conversation loading in the
conversation listing TUI and the `lmcli list` command.
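As a rough sketch of what the faster listing query can look like (assuming a GORM-backed store and a `Conversation` model with a `LastMessageAt` column; names here are illustrative rather than the actual lmcli schema):

```go
package conversation

import (
	"time"

	"gorm.io/gorm"
)

// Conversation is an illustrative model; the real schema may differ.
type Conversation struct {
	ID            uint
	Title         string
	LastMessageAt time.Time
}

// Store wraps the database handle.
type Store struct {
	db *gorm.DB
}

// LoadConversationList orders by LastMessageAt and never touches the
// messages table, which is what makes the listing fast.
func (s *Store) LoadConversationList() ([]Conversation, error) {
	var conversations []Conversation
	err := s.db.Order("last_message_at DESC").Find(&conversations).Error
	return conversations, err
}
```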
This refactor splits all conversation concerns out into a new
`conversation` package. There is now a split between the `conversation`
and `api` representations of `Message`, the latter storing the minimum
information required for interaction with LLM providers. Conversion
between the two is necessary when making LLM calls.
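Roughly, the split and the conversion look like this (field and type names are illustrative, not the actual lmcli definitions):

```go
package conversation

import "time"

// Message as stored with a conversation: persistence and UI concerns.
type Message struct {
	ID             uint
	ConversationID uint
	Role           string
	Content        string
	CreatedAt      time.Time
}

// APIMessage stands in for the api package's Message: only what an LLM
// provider needs for a chat completion request.
type APIMessage struct {
	Role    string
	Content string
}

// ToAPIMessages converts stored messages into the provider representation
// before making an LLM call.
func ToAPIMessages(msgs []Message) []APIMessage {
	out := make([]APIMessage, 0, len(msgs))
	for _, m := range msgs {
		out = append(out, APIMessage{Role: m.Role, Content: m.Content})
	}
	return out
}
```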
Adjusted `ctrl+t` in the chat view to toggle `showDetails`, which controls
the display of system messages, message metadata (generation model), and
tool call details
Modified message selection update logic to skip messages that aren't
shown
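A simplified sketch of how the toggle and the selection logic can fit together (only system messages are hidden here, and the `isShown` helper and model fields are illustrative, not the actual lmcli code):

```go
package chat

// message is a stand-in for the chat view's message type.
type message struct {
	Role    string // "system", "user", "assistant", "tool"
	Content string
}

type chatModel struct {
	messages    []message
	selected    int
	showDetails bool
}

// toggleDetails is what ctrl+t invokes.
func (m *chatModel) toggleDetails() {
	m.showDetails = !m.showDetails
}

// isShown reports whether a message is visible under the current settings;
// here only system messages are hidden when showDetails is off.
func (m *chatModel) isShown(i int) bool {
	return m.showDetails || m.messages[i].Role != "system"
}

// selectNext moves the selection down, skipping messages that aren't shown.
func (m *chatModel) selectNext() {
	for i := m.selected + 1; i < len(m.messages); i++ {
		if m.isShown(i) {
			m.selected = i
			return
		}
	}
}
```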
Improves render handling by moving the responsibility of laying out the
whole UI from each view into the main `tui` model. Our `ViewModel`
interface has now diverged from bubbletea's `Model` and introduces
individual `Header`, `Content`, and `Footer` methods for rendering those
UI elements.
Also moved away from using value receivers on our Update and View
functions (as is common across Bubbletea) to pointer receivers, which
cleaned up some of the weirder aspects of the code (e.g. before we
essentially had no choice but to do our rendering in `Update` in order
to calculate and update the final height of the main content's
`viewport`).
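A sketch of the resulting shape, with illustrative signatures (the real `ViewModel` interface may differ in detail), showing the split rendering methods and a pointer-receiver implementation:

```go
package shared

import tea "github.com/charmbracelet/bubbletea"

// ViewModel has diverged from tea.Model: rendering is split into Header,
// Content, and Footer so the root tui model can assemble the whole window,
// and Update mutates the view in place (pointer receivers) rather than
// returning a copy of the model.
type ViewModel interface {
	Init() tea.Cmd
	Update(msg tea.Msg) tea.Cmd
	Header(width int) string
	Content(width, height int) string
	Footer(width int) string
}

// chatView is an illustrative implementation using pointer receivers.
type chatView struct {
	input string
}

func (v *chatView) Init() tea.Cmd              { return nil }
func (v *chatView) Update(msg tea.Msg) tea.Cmd { return nil }
func (v *chatView) Header(width int) string    { return "lmcli chat" }
func (v *chatView) Content(width, height int) string {
	return v.input
}
func (v *chatView) Footer(width int) string { return "" }
```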
`tui/tui.go` is no longer responsible for passing window resize updates
to all views; instead, we request a new window size message to be sent at
the same time we enter a view, allowing the view to catch and handle it.
Add `Initialized` to the `tui/shared/View` model; now we only call
`Init` on a view before entering it for the first time, rather than
calling `Init` on all views when the application starts.
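One way the root model can combine lazy `Init` with the window size request (a sketch; assumes a Bubble Tea version that provides `tea.WindowSize()`, and illustrative field names):

```go
package tui

import tea "github.com/charmbracelet/bubbletea"

// viewEntry pairs a view with whether its Init has already run.
type viewEntry struct {
	model       tea.Model // stand-in for the project's ViewModel
	Initialized bool
}

// Model is the root tui model owning all views.
type Model struct {
	views      map[string]*viewEntry
	activeView string
}

// enterView switches the active view. Init runs only on first entry, and a
// fresh WindowSizeMsg is requested so the incoming view can lay itself out.
func (m *Model) enterView(name string) tea.Cmd {
	m.activeView = name
	v := m.views[name]

	var cmds []tea.Cmd
	if !v.Initialized {
		cmds = append(cmds, v.model.Init())
		v.Initialized = true
	}
	cmds = append(cmds, tea.WindowSize()) // assumes Bubble Tea >= v0.26
	return tea.Batch(cmds...)
}
```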
Renames file, small cleanups
An agent is currently a name given to a system prompt and a set of
tools which the agent has access to.
This resolves the previous issue of the set of configured tools being
available in *all* contexts, which wasn't always desired. Tools are now
only available when an agent is explicitly requested using the
`-a/--agent` flag.
Agents are expected to be expanded on: the concept of task-specialized
agents (e.g. coding), the ability to define a set of files an agent
should always have access to for RAG purposes, etc.
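In config terms, an agent boils down to something like the following (an illustrative sketch; the actual lmcli config keys and types may differ):

```go
package config

// Agent ties a name to a system prompt and the set of tools that agent may
// use. Field names here are illustrative, not the actual lmcli config keys.
type Agent struct {
	Name         string   `yaml:"name"`
	SystemPrompt string   `yaml:"systemPrompt"`
	Tools        []string `yaml:"tools"`
}

// toolsForRequest returns tools only when an agent was explicitly requested
// via -a/--agent; with no agent selected, no tools are offered to the model.
func toolsForRequest(agent *Agent) []string {
	if agent == nil {
		return nil
	}
	return agent.Tools
}
```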
Other changes:
- Removes the "tools" top-level config structure (though this is expected
to come back along with the ability to define custom tools).
- Renamed `pkg/agent` to `pkg/agents`
- More emphasis on `api` package. It now holds database model structs
from `lmcli/models` (which is now gone) as well as the tool spec,
call, and result types. `tools.Tool` is now `api.ToolSpec`.
`api.ChatCompletionClient` was renamed to
`api.ChatCompletionProvider`.
- Change ChatCompletion interface and implementations to no longer do
automatic tool call recursion - they simply return a ToolCall message,
and the caller decides what to do with it (e.g. prompt for user
confirmation before executing); a rough sketch of the resulting
interface follows this list
- `api.ChatCompletionProvider` functions have had their ReplyCallback
parameter removed, as now they only return a single reply.
- Added a top-level `agent` package, moved the current built-in tools
implementations under `agent/toolbox`. `tools.ExecuteToolCalls` is now
`agent.ExecuteToolCalls`.
- Fixed request context handling in openai, google, ollama (use
`NewRequestWithContext`), cleaned up request cancellation in TUI
- Fix tool call TUI persistence bug (we were skipping messages with empty
content)
- Now handle tool calling from TUI layer
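A rough sketch of the resulting provider interface (type and method names are illustrative stand-ins for the real `api` package definitions):

```go
package api

import "context"

// ToolCall and Message are illustrative stand-ins for the api package types.
type ToolCall struct {
	Name      string
	Arguments map[string]any
}

type Message struct {
	Role      string
	Content   string
	ToolCalls []ToolCall
}

// ChatCompletionProvider returns a single reply per call. If that reply
// contains tool calls, the caller (e.g. the TUI) decides whether to execute
// them and send the results back in a follow-up request; the provider no
// longer recurses on its own, and there is no ReplyCallback parameter.
type ChatCompletionProvider interface {
	CreateChatCompletion(ctx context.Context, messages []Message) (*Message, error)
	CreateChatCompletionStream(ctx context.Context, messages []Message, chunks chan<- string) (*Message, error)
}
```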
TODO:
- Prompt users before executing tool calls
- Automatically send tool results to the model (or make this toggleable)
Also calculate the tokens/chunk for gemini responses, fixing the tok/s
meter for gemini models.
Further, only consider the first candidate of streamed gemini responses.
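The tok/s arithmetic itself is simple once per-chunk token counts are available; a generic sketch follows (how the per-chunk count is read from the gemini response metadata is outside the scope of this sketch):

```go
package provider

import "time"

// rateMeter accumulates per-chunk token counts to report tokens/second.
type rateMeter struct {
	start  time.Time
	tokens int
}

// addChunk records the token count of one streamed chunk.
func (r *rateMeter) addChunk(chunkTokens int) {
	if r.start.IsZero() {
		r.start = time.Now()
	}
	r.tokens += chunkTokens
}

// tokensPerSecond reports the running generation rate.
func (r *rateMeter) tokensPerSecond() float64 {
	elapsed := time.Since(r.start).Seconds()
	if elapsed <= 0 {
		return 0
	}
	return float64(r.tokens) / elapsed
}
```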
Instead of a value, which led to some odd handling of conversation
references.
Also fixed some formatting and removed an unnecessary (and probably
broken) setting of ConversationID in a call to
`cmdutil.HandleConversationReply`
We were sending an empty string to the output channel when `ping`
messages were received from Anthropic's API. This was causing the TUI to
break since we started doing an empty chunk check (and mistakenly not
waiting for future chunks if an empty one was received).
This commit makes it so we no longer send an empty string on the ping
message from Anthropic, and we update the handling of msgAssistantChunk
and msgAssistantReply to make it less likely that we forget to wait for
the next chunk/reply.
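A sketch of the intended provider-side behaviour (the `ping` and `content_block_delta` event names follow Anthropic's streaming event types; the handler itself is illustrative, not the actual lmcli code):

```go
package anthropic

// handleStreamEvent forwards streamed text to the output channel, emitting
// nothing for keep-alive events. Previously a "ping" produced an empty
// string on the channel, which the TUI's empty-chunk check misread as the
// end of the stream.
func handleStreamEvent(eventType, text string, output chan<- string) {
	switch eventType {
	case "ping":
		// Keep-alive only; send nothing.
	case "content_block_delta":
		if text != "" {
			output <- text
		}
	}
}
```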