
lmcli - Large ____ Model CLI

lmcli is a versatile command-line interface for interacting with Large Language Models (LLMs) and Large Multimodal Models (LMMs).

Features

  • Multiple model backends (Ollama, OpenAI, Anthropic, Google)
  • Customizable agents with tool calling
  • Persistent conversation management
  • Message branching (edit and re-prompt to your heart's desire)
  • Interactive terminal interface for seamless chat experiences
  • $EDITOR integration: write and edit prompts from the comfort of your own editor :)
  • vi-like bindings
  • Syntax highlighting!

Screenshots

[TODO: Add screenshots of the TUI in action, showing different views and features]

Installation

To install lmcli, make sure you have Go installed on your system, then run:

go install git.mlow.ca/mlow/lmcli@latest
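
This places the lmcli binary in Go's bin directory. If the command isn't found afterwards, add that directory to your PATH (a common fix, assuming GOBIN is unset and the default GOPATH is used):

export PATH="$PATH:$(go env GOPATH)/bin"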

Configuration

lmcli uses a YAML configuration file located at ~/.config/lmcli/config.yaml. Here's a sample configuration:

defaults:
  model: claude-3-5-sonnet-20240620
  maxTokens: 3072
  temperature: 0.2
conversations:
  titleGenerationModel: claude-3-haiku-20240307
chroma:
  style: onedark
  formatter: terminal16m
agents:
  - name: coder
    tools:
      - dir_tree
      - read_file
      #- write_file
    systemPrompt: |
      You are an experienced software engineer...      
  # ...
providers:
  - kind: ollama
    models:
      - phi3:instruct
      - llama3:8b
  - kind: anthropic
    apiKey: your-api-key-here
    models:
      - claude-3-5-sonnet-20240620
      - claude-3-opus-20240229
      - claude-3-haiku-20240307
  - kind: openai
    apiKey: your-api-key-here
    models:
      - gpt-4o
      - gpt-4-turbo
  - name: openrouter
    kind: openai
    apiKey: your-api-key-here
    baseUrl: https://openrouter.ai/api/
    models:
      - qwen/qwen-2-72b-instruct
  # ...

Customize this file to add your own providers, agents, and models.

Syntax highlighting

Syntax highlighting is performed by Chroma.

Refer to Chroma/styles for available styles (TODO: add support for custom Chroma styles).

Available formatters:

  • terminal - 8 colors
  • terminal16 - 16 colors
  • terminal256 - 256 colors
  • terminal16m - true color (default)
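
Both settings live under the chroma section of config.yaml. For example, to pair 256-color output with a different style (a sketch; "github" is one of Chroma's built-in styles, adjust to taste):

chroma:
  style: github
  formatter: terminal256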

Agents

Agents in lmcli combine a system prompt with a set of available tools. Agents are defined in config.yaml and are called upon with the -a/--agent flag.

Agent functionality is expected to expand over time, bringing agents closer to parity with something like OpenAI's "Assistants" feature.
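
As an example, a hypothetical "reviewer" agent could be defined alongside "coder" following the configuration format shown above, then selected on the command line (the agent name and prompt here are purely illustrative):

agents:
  - name: reviewer
    tools:
      - dir_tree
      - read_file
    systemPrompt: |
      You are a meticulous code reviewer...

$ lmcli chat --agent reviewer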

Tools

Tools are used by agents to acquire information from and interact with external systems. The following built-in tools are available:

  • dir_tree: Display a directory structure
  • read_file: Read the contents of a file
  • write_file: Write content to a file
  • file_insert_lines: Insert lines at a specific position in a file
  • file_replace_lines: Replace a range of lines in a file

Obviously, some of these tools carry significant risk. Use wisely :)
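
For instance, the sample configuration above leaves write_file commented out for the coder agent. Uncommenting it gives that agent write access to your filesystem, so enable it only where that's acceptable:

agents:
  - name: coder
    tools:
      - dir_tree
      - read_file
      - write_file  # the agent can now create and overwrite files
    systemPrompt: |
      You are an experienced software engineer...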

More tool features are planned, including the ability to define arbitrary tools that call out to external scripts, as well as tools to spawn sub-agents, perform web searches, and more.

Usage

$ lmcli help
lmcli - Large Language Model CLI

Usage:
  lmcli <command> [flags]
  lmcli [command]

Available Commands:
  chat        Open the chat interface
  clone       Clone conversations
  completion  Generate the autocompletion script for the specified shell
  continue    Continue a conversation from the last message
  edit        Edit the last user reply in a conversation
  help        Help about any command
  list        List conversations
  new         Start a new conversation
  prompt      Do a one-shot prompt
  rename      Rename a conversation
  reply       Reply to a conversation
  retry       Retry the last user reply in a conversation
  rm          Remove conversations
  view        View messages in a conversation

Flags:
  -h, --help   help for lmcli

Use "lmcli [command] --help" for more information about a command.

Examples

Start a new chat with the coder agent:

$ lmcli chat --agent coder

Start a new conversation, imperative style (no TUI):

$ lmcli new "Help me plan meals for the next week"

Send a one-shot prompt (no persistence):

$ lmcli prompt "What is the answer to life, the universe, and everything?"
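
Conversations started with new, chat, or reply are persisted and can be picked up again later. For instance (a sketch: conversation identifiers are whatever lmcli list reports; check each command's --help for exact arguments):

$ lmcli list
$ lmcli continue <conversation>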

Contributing

Contributions to lmcli are welcome! Feel free to open issues or submit pull requests on the project repository.

For a full list of planned features and improvements, check out the TODO.md file.

License

To be determined

Acknowledgements

lmcli is a small hobby project. Special thanks to the Go community and the creators of the libraries used in this project.