# lmcli

`lmcli` is a (Large) Language Model CLI.
Current features:
- Perform one-shot prompts with `lmcli prompt <message>` (example below)
- Manage persistent conversations with the `new`, `reply`, `view`, and `rm` sub-commands
- Syntax highlighted output
- Tool calling (see the Tools section below)
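
For example, a one-shot prompt might look like this (the prompt text is arbitrary; run `lmcli help` for the exact arguments of the conversation sub-commands, which aren't shown here):

```shell
# One-shot prompt; the response is printed with syntax highlighting
$ lmcli prompt "Write a small Go function that reverses a string"
```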
Planned features:
- Ask questions about content received on stdin
- Conversation editing
Maybe features:
- Support for additional models/APIs besides just OpenAI
- Natural language image generation, iterative editing
## Tools
There are a few tools available, which must be explicitly enabled by adding the tool name to the `openai.enabledTools` array in `config.yaml` (see the example after the tool list below).
Note: all filesystem related tools operate relative to the current directory only. They do not accept absolute paths, and all efforts are made to ensure they cannot escape above the working directory (not quite using chroot, but in effect). Pay close attention to where you are running `lmcli`, as the model could at any time decide to use one of these tools to discover and read potentially sensitive information from your filesystem.
It's best to only have these tools enabled in `config.yaml` when you intend to be using them, because their descriptions (see `pkg/cmd/functions.go`) count towards context usage.
Available tools:
- `read_dir` - Read the contents of a directory
- `read_file` - Read the contents of a file
- `write_file` - Write contents to a file
- `insert_file_lines` - Insert lines at a position within a file. Tricky for the model to use, but can potentially save tokens.
- `replace_file_lines` - Remove or replace a range of lines within a file. Even trickier for the model to use.
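
A sketch of enabling a couple of tools, assuming the dotted setting name `openai.enabledTools` maps to nested keys in `config.yaml`:

```yaml
# ~/.config/lmcli/config.yaml (assumed nested layout)
openai:
  enabledTools:
    - read_dir
    - read_file
```

Remove the entries again when you're done with the tools, so their descriptions stop counting towards context usage.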
## Install
```shell
$ go install git.mlow.ca/mlow/lmcli@latest
```
## Usage
Invoke `lmcli` at least once:

```shell
$ lmcli help
```
Edit `~/.config/lmcli/config.yaml` and set `openai.apiKey` to your API key.
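
For example, assuming the dotted setting name maps to nested YAML keys (replace the placeholder with your real key):

```yaml
openai:
  apiKey: YOUR_OPENAI_API_KEY
```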
Refer back to the output of `lmcli help` for usage.
Enjoy!