# TODO

- [x] Strip Anthropic XML function call scheme from content, to reconstruct when calling Anthropic?
- [x] `dir_tree` tool
- [x] Implement native Anthropic API tool calling
- [x] Agents - a named combination of a system prompt, a set of available tools, and potentially other relevant data (e.g. external service credentials, files for RAG, etc.), which the user explicitly selects (e.g. `lmcli chat --agent code-helper`, `lmcli chat -a financier`). Sketch at the end of this file.
- [ ] Specialized agents with integrations beyond basic tool calling, e.g. a coding agent that bakes in efficient code context management (only the current state of relevant files is shown to the model in the system prompt, rather than keeping them in the conversation messages)
- [ ] Agents may have some form of long-term memory management (key-value? natural language?)
- [ ] Sandboxed Python/JS interpreters (implemented with containers). Sketch at the end of this file.
- [ ] Support for arbitrary external script tools
- [ ] Search - RAG-driven search of existing conversations (e.g. "hey, remind me of the conversation we had six months ago about X")
- [ ] Conversation categorization - model-driven category creation and conversation classification
- [ ] Image input
- [ ] Image output (sixel support?)
- [ ] Conversation exports to HTML/PDF/JSON
- [ ] Store the model that generated each message
- [ ] Hidden CoT

## UI

- [x] Prettify/normalize tool_call and tool_result outputs so they can be shown or optionally hidden in `lmcli view` and `lmcli chat`
- [x] Conversation deletion in the conversations view
- [ ] User confirmation before calling (some?) tools
- [ ] Message deletion: Ctrl+D to delete a message and attach its children to its parent, Ctrl+Shift+D to delete a message and its descendants. Sketch at the end of this file.
- [ ] Show available key bindings and their actions in any given view
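
## Sketches

A rough sketch of what an agent definition could look like, assuming a Go implementation; the type and field names below are illustrative and not lmcli's actual schema.

```go
// Hypothetical agent definition; field names are assumptions, not lmcli's schema.
package agent

// Agent bundles a system prompt with the tools and extra data it may use.
// Selected explicitly by the user, e.g. `lmcli chat --agent code-helper`.
type Agent struct {
	Name         string            // e.g. "code-helper", "financier"
	SystemPrompt string            // prepended to every conversation
	Tools        []string          // names of tools this agent is allowed to call
	Credentials  map[string]string // external service credentials (assumption)
	RAGFiles     []string          // files indexed for RAG context (assumption)
}
```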
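
For the sandboxed interpreters item, a minimal sketch of running untrusted Python inside a locked-down container, assuming Docker is available; the flags, image, and resource limits are illustrative choices, not a committed design.

```go
// Sketch: run untrusted Python in a network-less, resource-capped container.
package sandbox

import (
	"context"
	"os/exec"
	"time"
)

// RunPython executes the given code inside a disposable container and
// returns its combined stdout/stderr. Limits here are placeholder values.
func RunPython(code string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
		"--network=none", "--memory=256m", "--pids-limit=64",
		"python:3.12-alpine", "python", "-c", code)
	out, err := cmd.CombinedOutput()
	return string(out), err
}
```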
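
For the message deletion item under UI, a sketch of the Ctrl+D behavior (delete a message and reattach its children to its parent), assuming messages form a tree linked by parent IDs; the types are illustrative, not lmcli's actual data model.

```go
// Illustrative message tree; not lmcli's actual data model.
package convo

type Message struct {
	ID       int
	ParentID int // 0 means the message has no parent (assumption)
}

// DeleteReparent removes the message with the given ID and attaches its
// children to that message's parent, preserving the rest of the tree (Ctrl+D).
func DeleteReparent(messages []Message, id int) []Message {
	parentID := 0
	for _, m := range messages {
		if m.ID == id {
			parentID = m.ParentID
			break
		}
	}
	kept := make([]Message, 0, len(messages))
	for _, m := range messages {
		if m.ID == id {
			continue // drop the deleted message itself
		}
		if m.ParentID == id {
			m.ParentID = parentID // reattach children to the grandparent
		}
		kept = append(kept, m)
	}
	return kept
}
```

Ctrl+Shift+D would instead collect the transitive set of descendants of the deleted message and drop them all.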