# lmcli - Large Language Model CLI
`lmcli` is a versatile command-line interface for interacting with Large Language Models (LLMs) and Large Multimodal Models (LMMs).
## Features
- Multiple model backends (Ollama, OpenAI, Anthropic, Google)
- Customizable agents with tool calling
- Persistent conversation management
- Interactive terminal interface for seamless chat experiences
- Syntax highlighting!
## Screenshots
[TODO: Add screenshots of the TUI in action, showing different views and features]
## Installation
To install `lmcli`, make sure you have Go installed on your system, then run:
```sh
go install git.mlow.ca/mlow/lmcli@latest
```
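This places the `lmcli` binary in your Go bin directory. If the command isn't found afterwards, make sure that directory is on your `PATH` (a minimal sketch, assuming a default Go installation):
```sh
# go install puts binaries in $GOBIN, or $(go env GOPATH)/bin by default
export PATH="$PATH:$(go env GOPATH)/bin"
```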
## Configuration
`lmcli` uses a YAML configuration file located at `~/.config/lmcli/config.yaml`. Here's a sample configuration:
```yaml
defaults:
  model: claude-3-5-sonnet-20240620
  maxTokens: 3072
  temperature: 0.2
conversations:
  titleGenerationModel: claude-3-haiku-20240307
chroma:
  style: onedark
  formatter: terminal16m
agents:
  - name: code-helper
    tools:
      - dir_tree
      - read_file
      #- write_file
    systemPrompt: |
      You are an experienced software engineer...
      # ...
providers:
  - kind: ollama
    models:
      - phi3:instruct
      - llama3:8b
  - kind: anthropic
    apiKey: your-api-key-here
    models:
      - claude-3-5-sonnet-20240620
      - claude-3-opus-20240229
      - claude-3-haiku-20240307
  - kind: openai
    apiKey: your-api-key-here
    models:
      - gpt-4o
      - gpt-4-turbo
  - name: openrouter
    kind: openai
    apiKey: your-api-key-here
    baseUrl: https://openrouter.ai/api/
    models:
      - qwen/qwen-2-72b-instruct
      # ...
```
Customize this file to add your own providers, agents, and models.
## Usage
Here's the default help output for `lmcli`:
```console
$ lmcli help
lmcli - Large Language Model CLI

Usage:
  lmcli <command> [flags]
  lmcli [command]

Available Commands:
  chat        Open the chat interface
  clone       Clone conversations
  completion  Generate the autocompletion script for the specified shell
  continue    Continue a conversation from the last message
  edit        Edit the last user reply in a conversation
  help        Help about any command
  list        List conversations
  new         Start a new conversation
  prompt      Do a one-shot prompt
  rename      Rename a conversation
  reply       Reply to a conversation
  retry       Retry the last user reply in a conversation
  rm          Remove conversations
  view        View messages in a conversation

Flags:
  -h, --help   help for lmcli

Use "lmcli [command] --help" for more information about a command.
```
### Examples
Start a new chat with the `code-helper` agent:
```console
$ lmcli chat --agent code-helper
```
Start a new conversation, imperative style (no TUI):
```console
$ lmcli new "Help me plan meals for the next week"
```
Send a one-shot prompt (no persistence):
```console
$ lmcli prompt "What is the answer to life, the universe, and everything?"
```
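List the conversations `lmcli` has persisted so far (handy before `continue`, `reply`, or `view`):
```console
$ lmcli list
```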
## Agents
Agents in `lmcli` are configurations that combine a system prompt with a set of available tools. You can define agents in the `config.yaml` file and switch between them using the `--agent` or `-a` flag.
Example:
```sh
lmcli chat -a financier
```
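The `financier` agent above isn't part of the sample configuration; a definition for it would follow the same shape as `code-helper`. The name, tool selection, and prompt below are purely illustrative:
```yaml
agents:
  - name: financier
    tools:
      - read_file  # e.g. to inspect exported CSV statements
    systemPrompt: |
      You are a careful personal-finance assistant...
```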
## Tools
`lmcli` supports tool calling. The currently built-in tools are:
- `dir_tree`: Display a directory structure
- `read_file`: Read the contents of a file
- `write_file`: Write content to a file
- `file_insert_lines`: Insert lines at a specific position in a file
- `file_replace_lines`: Replace a range of lines in a file
Obviously, some of these tools carry significant risk. Use wisely :)
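To make a tool available to an agent, list it under that agent's `tools` in `config.yaml`. As a sketch building on the sample configuration above, enabling the file-editing tools for `code-helper` might look like:
```yaml
agents:
  - name: code-helper
    tools:
      - dir_tree
      - read_file
      - write_file          # writes to disk; use with care
      - file_insert_lines
      - file_replace_lines
```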
## Contributing
Contributions to `lmcli` are welcome! Feel free to open issues or submit pull requests on the project repository.
For a full list of planned features and improvements, check out the [TODO.md](TODO.md) file.
## License
MIT
## Acknowledgements
`lmcli` is just a hobby project. Special thanks to the Go community and the creators of the libraries used in this project.