Update README.md

parent 3a0d70db86
commit 034d8d8820
# lmcli - Large Language Model CLI

`lmcli` is a versatile command-line interface for interacting with Large Language Models (LLMs) and Large Multimodal Models (LMMs).

## Features

- Multiple model backends (Ollama, OpenAI, Anthropic, Google)
- Customizable agents with tool calling
- Persistent conversation management
- Interactive terminal interface for seamless chat experiences
- Syntax highlighting!

## Screenshots

[TODO: Add screenshots of the TUI in action, showing different views and features]

## Installation

To install `lmcli`, make sure you have Go installed on your system, then run:

```sh
go install git.mlow.ca/mlow/lmcli@latest
```
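
If the `lmcli` command isn't found afterwards, Go's install directory is probably not on your `PATH`. A minimal sketch, assuming a default Go setup (adjust for your shell):

```sh
# `go install` places binaries in $(go env GOPATH)/bin (or $GOBIN, if set);
# add that directory to PATH so the lmcli binary can be found:
export PATH="$PATH:$(go env GOPATH)/bin"
lmcli help
```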

## Configuration

`lmcli` uses a YAML configuration file located at `~/.config/lmcli/config.yaml`. Here's a sample configuration:

```yaml
defaults:
  model: claude-3-5-sonnet-20240620
  maxTokens: 3072
  temperature: 0.2
conversations:
  titleGenerationModel: claude-3-haiku-20240307
chroma:
  style: onedark
  formatter: terminal16m
agents:
  - name: code-helper
    tools:
      - dir_tree
      - read_file
      #- write_file
    systemPrompt: |
      You are an experienced software engineer...
      # ...
providers:
  - kind: ollama
    models:
      - phi3:instruct
      - llama3:8b
  - kind: anthropic
    apiKey: your-api-key-here
    models:
      - claude-3-5-sonnet-20240620
      - claude-3-opus-20240229
      - claude-3-haiku-20240307
  - kind: openai
    apiKey: your-api-key-here
    models:
      - gpt-4o
      - gpt-4-turbo
  - name: openrouter
    kind: openai
    apiKey: your-api-key-here
    baseUrl: https://openrouter.ai/api/
    models:
      - qwen/qwen-2-72b-instruct
      # ...
```

Customize this file to add your own providers, agents, and models.
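
Because providers with `kind: openai` accept a `name` and a custom `baseUrl` (as the `openrouter` entry above shows), other OpenAI-compatible endpoints can likely be added the same way. A hedged sketch with a hypothetical local server and model name (mirror the `openrouter` entry for the exact `baseUrl` shape expected):

```yaml
providers:
  # hypothetical: a local OpenAI-compatible server
  - name: local
    kind: openai
    apiKey: placeholder-key   # some local servers ignore the key entirely
    baseUrl: http://localhost:8080/
    models:
      - my-local-model
```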

## Usage

Here's the default help output for `lmcli`:

```console
$ lmcli help
lmcli - Large Language Model CLI

Usage:
  lmcli <command> [flags]
  lmcli [command]

Available Commands:
  chat        Open the chat interface
  clone       Clone conversations
  completion  Generate the autocompletion script for the specified shell
  continue    Continue a conversation from the last message
  edit        Edit the last user reply in a conversation
  help        Help about any command
  list        List conversations
  new         Start a new conversation
  prompt      Do a one-shot prompt
  rename      Rename a conversation
  reply       Reply to a conversation
  retry       Retry the last user reply in a conversation
  rm          Remove conversations
  view        View messages in a conversation

Flags:
  -h, --help   help for lmcli

Use "lmcli [command] --help" for more information about a command.
```
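
The `completion` subcommand above follows the usual Cobra-style pattern, so loading completions for your shell should look roughly like this (a sketch for bash; run `lmcli completion --help` for the zsh, fish, and powershell variants):

```sh
# load lmcli completions into the current bash session
source <(lmcli completion bash)
```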

### Examples

Start a new chat with the `code-helper` agent:

```console
$ lmcli chat --agent code-helper
```

Start a new conversation, imperative style (no TUI):

```console
$ lmcli new "Help me plan meals for the next week"
```

Send a one-shot prompt (no persistence):

```console
$ lmcli prompt "What is the answer to life, the universe, and everything?"
```
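
Conversations created with `new` persist, so the `list`, `view`, `reply`, and `continue` commands from the help output can pick them back up later. A rough sketch (the `<conversation>` placeholder is hypothetical; see `lmcli list` output and each command's `--help` for how conversations are actually referenced):

```console
$ lmcli list
$ lmcli view <conversation>
```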

## Agents

Agents in `lmcli` are configurations that combine a system prompt with a set of available tools. You can define agents in the `config.yaml` file and switch between them using the `--agent` or `-a` flag.

Example:

```sh
lmcli chat -a financier
```
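
The `financier` agent above isn't defined in the sample configuration; a hypothetical definition, following the same schema as the `code-helper` entry, might look like this:

```yaml
agents:
  # hypothetical agent for the example above
  - name: financier
    tools:
      - read_file   # e.g. to read exported CSV statements
    systemPrompt: |
      You are a careful personal finance assistant...
```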

## Tools

`lmcli` supports tool calling. The following built-in tools are currently available:

- `dir_tree`: Display a directory structure
- `read_file`: Read the contents of a file
- `write_file`: Write content to a file
- `file_insert_lines`: Insert lines at a specific position in a file
- `file_replace_lines`: Replace a range of lines in a file

Obviously, some of these tools carry significant risk. Use them wisely :)

More tool features are planned, including the ability to define arbitrary tools that call out to external scripts, tools to spawn sub-agents, perform web searches, and more.

## Contributing

Contributions to `lmcli` are welcome! Feel free to open issues or submit pull requests on the project repository.

For a full list of planned features and improvements, check out the [TODO.md](TODO.md) file.

## License

MIT

## Acknowledgements

`lmcli` is just a hobby project. Special thanks to the Go community and the creators of the libraries used in this project.