A local command-line tool for managing and executing llamafiles
Built after reading through https://justine.lol/oneliners/ and wanting a quick command-line tool to bootstrap and run all of the available models easily, as well as to get a better grasp of how everything operates.
./lai test
Downloads the llava model and runs an arbitrary test prompt
./lai list
Lists all models recognized by local-ai
./lai list offline
Lists all models currently downloaded and available to run and serve
./lai {model-name} params
Explains the parameters relevant to the specified model and how to pass them
./lai {model-name} run {model parameters}
Runs the model with the parameters available to the specified model (a minimal Rust sketch of this flow follows below)
./lai {model-name} serve {model parameters}
Starts a listening service that accepts requests for the model with the given parameters
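Under the hood the tool mostly reduces to bookkeeping around models.json plus spawning llamafiles, which are self-contained executables. The sketch below shows one plausible shape for `list offline` and `run` in Rust; the models.json schema, file locations, helper names, and the `serde_json` dependency are assumptions made for illustration, not the actual implementation.

```rust
// Hypothetical sketch of `lai list offline` and `lai {model} run`.
// The models.json schema and the on-disk layout are assumptions.
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::process::Command;

/// Assumed models.json shape: { "llava": "https://.../llava.llamafile", ... }
fn load_models(path: &str) -> HashMap<String, String> {
    let raw = fs::read_to_string(path).expect("models.json not found");
    serde_json::from_str(&raw).expect("models.json is not valid JSON")
}

/// `lai list offline`: a model counts as offline-ready if its llamafile
/// is already sitting on disk next to the binary.
fn list_offline(models: &HashMap<String, String>) {
    for name in models.keys() {
        if PathBuf::from(format!("./{name}.llamafile")).exists() {
            println!("{name}");
        }
    }
}

/// `lai {model} run ...`: llamafiles are self-contained executables, so
/// running a model is just spawning the downloaded file with whatever
/// parameters that model accepts.
fn run_model(name: &str, extra_args: &[String]) {
    let local = PathBuf::from(format!("./{name}.llamafile"));
    if !local.exists() {
        eprintln!("{name} is not downloaded yet");
        return;
    }
    // On some systems the llamafile has to be launched via `sh` instead of
    // directly; this sketch takes the direct route for brevity.
    let status = Command::new(&local)
        .args(extra_args)
        .status()
        .expect("failed to launch llamafile");
    println!("model exited with {status}");
}

fn main() {
    // e.g. `lai list offline` or `lai llava run -p "hello" --temp 0.7`
    let args: Vec<String> = std::env::args().skip(1).collect();
    let models = load_models("models.json");
    match args.as_slice() {
        [cmd, sub] if cmd == "list" && sub == "offline" => list_offline(&models),
        [model, verb, rest @ ..] if verb == "run" => run_model(model, rest),
        _ => eprintln!("usage: lai list offline | lai <model> run [params...]"),
    }
}
```

The `serve` subcommand would follow the same pattern, with the spawned llamafile kept running as a local server instead of exiting after a single prompt.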