# lmcli

`lmcli` is a (Large) Language Model CLI.

Current features:
- Perform one-shot prompts with `lmcli prompt <message>`
- Manage persistent conversations with the `new`, `reply`, `view`, `rm`,
  `edit`, `retry`, `continue` sub-commands.
- Syntax-highlighted output
- Tool calling; see the [Tools](#tools) section.

Maybe features:
- Chat-like interface (`lmcli chat`) for rapid back-and-forth conversations
- Support for additional models/APIs besides just OpenAI

## Tools

Tools must be explicitly enabled by adding the tool's name to the
`openai.enabledTools` array in `config.yaml`.
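
For example, enabling the `read_dir` and `read_file` tools listed below could
look something like the following sketch, which assumes the dotted
`openai.enabledTools` path corresponds to nested YAML keys:

```yaml
# config.yaml (illustrative layout; key nesting is assumed from the
# dotted `openai.enabledTools` name)
openai:
  enabledTools:
    - read_dir
    - read_file
```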

Note: all filesystem-related tools operate relative to the current directory
only. They do not accept absolute paths, and efforts are made to ensure they
cannot escape above the working directory. **Pay close attention to where you
run `lmcli`, as the model could at any time decide to use one of these tools to
discover and read potentially sensitive information from your filesystem.**

It's best to only have tools enabled in `config.yaml` when you intend to be
using them, since their descriptions (see `pkg/cli/functions.go`) count towards
context usage.

Available tools:
- `read_dir` - Read the contents of a directory.
- `read_file` - Read the contents of a file.
- `write_file` - Write contents to a file.
- `file_insert_lines` - Insert lines at a position within a file. Tricky for
  the model to use, but can potentially save tokens.
- `file_replace_lines` - Remove or replace a range of lines within a file. Even
  trickier for the model to use.

## Install

```shell
$ go install git.mlow.ca/mlow/lmcli@latest
```

## Usage

Invoke `lmcli` at least once:
```shell
$ lmcli help
```

Edit `~/.config/lmcli/config.yaml` and set `openai.apiKey` to your API key.
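
A minimal configuration might look something like this sketch, assuming the
dotted `openai.apiKey` setting maps to nested YAML keys (check your
`config.yaml` for the exact layout):

```yaml
# ~/.config/lmcli/config.yaml (illustrative)
openai:
  apiKey: your-api-key-here
```
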
Refer back to the output of `lmcli help` for usage.
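
For example, a one-shot prompt looks like this (the message is just an example;
arguments for the conversation sub-commands are best taken from `lmcli help`):

```shell
# one-shot prompt; the reply is printed with syntax highlighting
$ lmcli prompt "Write a hello world program in Go"
```
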
Enjoy!