A local command line tool for managing and executing llamafiles

Built after reading through https://justine.lol/oneliners/ and wanting a quick command line tool to bootstrap and run all the available models easily, as well as to get a better grasp of how everything operates.


Quick Start

./lai test

Downloads the llava model and runs an arbitrary test prompt

list all available models

./lai list

lists all models recognized by local-ai

list all downloaded models

./lai list offline

lists all models currently downloaded and available to run and serve

explain model

./lai {model-name} params

explains the parameters relevant to the specified model and how to pass them
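
For example, to inspect the parameters of the llava model pulled down by the quick start (llava is simply the model used above; any listed model name works):

```sh
# Show the parameters accepted by the llava model and how to pass them.
./lai llava params
```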

run model

./lai {model-name} run {model parameters}

runs the specified model using the parameters available to it
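
As a rough sketch of a run invocation, assuming llava has already been downloaded; `prompt` is a hypothetical parameter name used only for illustration, so check `./lai llava params` for the real ones:

```sh
# Hypothetical example: the `prompt` parameter name is illustrative only;
# use `./lai llava params` to see what the model actually accepts.
./lai llava run prompt="Describe a lemur in one sentence"
```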

serve model

./lai {model-name} serve {model parameters}

starts a listening service that accepts requests carrying model parameters for the specified model
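
A minimal sketch of serving and querying a model, assuming serve hands requests off to the llamafile's embedded llama.cpp server on its default port 8080 (how lai actually exposes the service may differ):

```sh
# Serve llava locally; parameters are passed the same way as for `run`.
./lai llava serve

# Assumption: the embedded llama.cpp server is reachable on its default
# port 8080, which exposes a /completion endpoint.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Say hello", "n_predict": 32}'
```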