# lmcli - Large Language Model CLI
`lmcli` is a versatile command-line interface for interacting with large language models (LLMs) and large multimodal models (LMMs).
## Features
- Multiple model backends (Ollama, OpenAI, Anthropic, Google)
- Customizable agents with tool calling
- Persistent conversation management
- Interactive terminal interface for seamless chat experiences
- Syntax highlighting!
## Screenshots
[TODO: Add screenshots of the TUI in action, showing different views and features]
## Installation

To install `lmcli`, make sure you have Go installed on your system, then run:

```shell
go install git.mlow.ca/mlow/lmcli@latest
```
## Configuration

`lmcli` uses a YAML configuration file located at `~/.config/lmcli/config.yaml`. Here's a sample configuration:
```yaml
defaults:
  model: claude-3-5-sonnet-20240620
  maxTokens: 3072
  temperature: 0.2
conversations:
  titleGenerationModel: claude-3-haiku-20240307
chroma:
  style: onedark
  formatter: terminal16m
agents:
  - name: code-helper
    tools:
      - dir_tree
      - read_file
      #- write_file
    systemPrompt: |
      You are an experienced software engineer...
      # ...
providers:
  - kind: ollama
    models:
      - phi3:instruct
      - llama3:8b
  - kind: anthropic
    apiKey: your-api-key-here
    models:
      - claude-3-5-sonnet-20240620
      - claude-3-opus-20240229
      - claude-3-haiku-20240307
  - kind: openai
    apiKey: your-api-key-here
    models:
      - gpt-4o
      - gpt-4-turbo
  - name: openrouter
    kind: openai
    apiKey: your-api-key-here
    baseUrl: https://openrouter.ai/api/
    models:
      - qwen/qwen-2-72b-instruct
  # ...
```
Customize this file to add your own providers, agents, and models.
## Usage

Here's the default help output for `lmcli`:
```
$ lmcli help
lmcli - Large Language Model CLI

Usage:
  lmcli <command> [flags]
  lmcli [command]

Available Commands:
  chat        Open the chat interface
  clone       Clone conversations
  completion  Generate the autocompletion script for the specified shell
  continue    Continue a conversation from the last message
  edit        Edit the last user reply in a conversation
  help        Help about any command
  list        List conversations
  new         Start a new conversation
  prompt      Do a one-shot prompt
  rename      Rename a conversation
  reply       Reply to a conversation
  retry       Retry the last user reply in a conversation
  rm          Remove conversations
  view        View messages in a conversation

Flags:
  -h, --help  help for lmcli

Use "lmcli [command] --help" for more information about a command.
```
## Examples

Start a new chat with the `code-helper` agent:

```shell
$ lmcli chat --agent code-helper
```
Start a new conversation, imperative style (no TUI):

```shell
$ lmcli new "Help me plan meals for the next week"
```
Send a one-shot prompt (no persistence):

```shell
$ lmcli prompt "What is the answer to life, the universe, and everything?"
```
## Agents

Agents in `lmcli` are configurations that combine a system prompt with a set of available tools. You can define agents in the `config.yaml` file and switch between them using the `--agent` or `-a` flag.
Example:

```shell
lmcli chat -a financier
```
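For this command to work, a matching agent entry must exist in `config.yaml`. A hypothetical `financier` agent might look like the following sketch (the name, tool selection, and prompt here are illustrative, not part of any shipped configuration):

```yaml
agents:
  - name: financier
    tools:
      - read_file  # e.g. to read exported statements (hypothetical use case)
    systemPrompt: |
      You are a careful personal finance assistant...
```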
## Tools

`lmcli` supports tool calling. The following built-in tools are currently available:

- `dir_tree`: Display a directory structure
- `read_file`: Read the contents of a file
- `write_file`: Write content to a file
- `file_insert_lines`: Insert lines at a specific position in a file
- `file_replace_lines`: Replace a range of lines in a file
Obviously, some of these tools carry significant risk. Use wisely :)
More tool features are planned, including the ability to define arbitrary tools that call out to external scripts, as well as tools to spawn sub-agents, perform web searches, and more.
## Contributing

Contributions to `lmcli` are welcome! Feel free to open issues or submit pull requests on the project repository.
For a full list of planned features and improvements, check out the `TODO.md` file.
## License
To be determined.
## Acknowledgements

`lmcli` is just a hobby project. Special thanks to the Go community and the creators of the libraries used in this project.