# lmcli

`lmcli` is a (Large) Language Model CLI.

Current features:

- Perform one-shot prompts with `lmcli prompt <message>`
- Manage persistent conversations with the `new`, `reply`, `view`, and `rm`
  sub-commands
- Syntax-highlighted output

Planned features:

- Ask questions about content received on stdin
- "Functions" to allow reading (and possibly writing) files within the
  current working directory

Maybe features:

- Natural-language image generation and iterative editing

## Tools

There are a few tools available, each of which must be explicitly enabled by
adding its name to the `openai.enabledTools` array in `config.yaml`.

Note: all filesystem-related tools operate relative to the current directory
only. They do not accept absolute paths, and best-effort checks ensure they
cannot escape above the working directory (similar in effect to a chroot,
though not implemented as one). **Pay close attention to where you run
`lmcli`, as the model could at any time decide to use one of these tools to
discover and read potentially sensitive information from your filesystem.**
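The kind of containment check described above can be sketched in Go. This is a simplified illustration assuming POSIX-style paths; the function name and exact logic are assumptions, not lmcli's actual implementation (see `pkg/cmd/functions.go` for the real code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isWithinCWD reports whether relPath stays inside the current
// working directory after lexical cleaning. Absolute paths are
// rejected outright, matching the behavior described above.
func isWithinCWD(relPath string) bool {
	if filepath.IsAbs(relPath) {
		return false
	}
	clean := filepath.Clean(relPath)
	// After cleaning, any path that escapes the working directory
	// begins with ".." (e.g. ".." itself or "../secrets").
	return clean != ".." && !strings.HasPrefix(clean, "../")
}

func main() {
	fmt.Println(isWithinCWD("notes/todo.txt")) // true
	fmt.Println(isWithinCWD("../etc/passwd"))  // false
	fmt.Println(isWithinCWD("a/../../b"))      // false: cleans to "../b"
}
```

A purely lexical check like this is only part of the story; a robust implementation would also resolve symlinks before deciding.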

It's best to enable these tools in `config.yaml` only when you intend to use
them, because their descriptions (see `pkg/cmd/functions.go`) count toward
context usage.

Available tools:

- `read_dir` - Read the contents of a directory
- `read_file` - Read the contents of a file
- `write_file` - Write contents to a file
- `insert_file_lines` - Insert lines at a position within a file. Tricky for
  the model to use, but can potentially save tokens.
- `replace_file_lines` - Remove or replace a range of lines within a file.
  Even trickier for the model to use.
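For example, enabling only the read-only filesystem tools might look like this in `config.yaml` (the `openai.enabledTools` key and tool names come from this README; the surrounding YAML structure is an assumption):

```yaml
openai:
  enabledTools:
    - read_dir
    - read_file
```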

## Install

```shell
$ go install git.mlow.ca/mlow/lmcli@latest
```

## Usage

Invoke `lmcli` at least once:
```shell
$ lmcli help
```
Edit `~/.config/lmcli/config.yaml` and set `openai.apiKey` to your API key.
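Assuming a conventional nested YAML layout (only the `openai.apiKey` key is confirmed by this README), the configuration might look like:

```yaml
# ~/.config/lmcli/config.yaml
openai:
  apiKey: YOUR_API_KEY  # replace with your actual API key
```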
Refer back to the output of `lmcli help` for usage.
Enjoy!