Latest commit 3fde58b77d by Matt Low: Package restructure and API changes, several fixes
- More emphasis on `api` package. It now holds database model structs
  from `lmcli/models` (which is now gone) as well as the tool spec,
  call, and result types. `tools.Tool` is now `api.ToolSpec`.
  `api.ChatCompletionClient` was renamed to
  `api.ChatCompletionProvider`.

- Change ChatCompletion interface and implementations to no longer do
  automatic tool call recursion - they simply return a ToolCall message
  which the caller can decide what to do with (e.g. prompt for user
  confirmation before executing)

- `api.ChatCompletionProvider` functions have had their ReplyCallback
  parameter removed, as now they only return a single reply.

- Added a top-level `agent` package, moved the current built-in tools
  implementations under `agent/toolbox`. `tools.ExecuteToolCalls` is now
  `agent.ExecuteToolCalls`.

- Fixed request context handling in openai, google, ollama (use
  `NewRequestWithContext`), cleaned up request cancellation in TUI

- Fix tool call TUI persistence bug (we were skipping messages with empty
  content)

- Now handle tool calling from TUI layer

TODO:
- Prompt users before executing tool calls
- Automatically send tool results to the model (or make this toggleable)
2024-06-21 05:24:02 +00:00

README.md

lmcli

lmcli is a (Large) Language Model CLI.

Current features:

  • Perform one-shot prompts with `lmcli prompt <message>`
  • Manage persistent conversations with the `new`, `reply`, `view`, `rm`, `edit`, `retry`, and `continue` sub-commands
  • Syntax-highlighted output
  • Tool calling, see the Tools section

Maybe features:

  • Chat-like interface (`lmcli chat`) for rapid back-and-forth conversations
  • Support for additional models/APIs besides just OpenAI

Tools

Tools must be explicitly enabled by adding the tool's name to the `openai.enabledTools` array in `config.yaml`.
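For example, enabling the two read-only tools might look like this (a minimal sketch; the surrounding structure of `config.yaml` may include other keys not shown here):

```yaml
openai:
  enabledTools:
    - read_dir
    - read_file
```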

Note: all filesystem-related tools operate relative to the current working directory only. They do not accept absolute paths, and efforts are made to ensure they cannot escape above the working directory. Pay close attention to where you run lmcli: the model could at any time decide to use one of these tools to discover and read potentially sensitive information from your filesystem.

It's best to only have tools enabled in `config.yaml` when you intend to be using them, since their descriptions (see `pkg/cli/functions.go`) count towards context usage.

Available tools:

  • `read_dir` - Read the contents of a directory.
  • `read_file` - Read the contents of a file.
  • `write_file` - Write contents to a file.
  • `file_insert_lines` - Insert lines at a position within a file. Tricky for the model to use, but can potentially save tokens.
  • `file_replace_lines` - Remove or replace a range of lines within a file. Even trickier for the model to use.

Install

$ go install git.mlow.ca/mlow/lmcli@latest

Usage

Invoke lmcli at least once:

$ lmcli help

Edit `~/.config/lmcli/config.yaml` and set `openai.apiKey` to your API key.
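A minimal configuration might look like this (a sketch only; the placeholder key is illustrative, and other settings such as `openai.enabledTools` are omitted):

```yaml
# ~/.config/lmcli/config.yaml
openai:
  apiKey: your-api-key-here
```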

Refer back to the output of lmcli help for usage.

Enjoy!