A local command line tool for managing and executing llamafiles
Built after reading through https://justine.lol/oneliners/ and wanting a quick command line tool to bootstrap and run all the available models easily, as well as to get a better grasp of how everything operates.
./lai test
Downloads the llava model and runs an arbitrary test prompt
./lai list
Lists all models recognized by local-ai
./lai list offline
Lists all models currently downloaded and available to run and serve
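For example, a quick way to compare what is recognized against what is already on disk (the output format is not shown here, since it depends on the local models.json):

```sh
# Show every model local-ai knows about
./lai list

# Show only the llamafiles already downloaded and ready to run or serve
./lai list offline
```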
./lai {model-name} params
Explains the parameters relevant to the specified model and how to pass them
./lai {model-name} run {relevant model parameters}
Runs the model using the parameters available to the specified model
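As a sketch of the typical flow, assuming llava is one of the recognized models and that its `params` output includes a prompt option (the `--prompt` flag below is a hypothetical example, not a documented flag):

```sh
# Inspect which parameters the model accepts
./lai llava params

# Run the model with one of those parameters (flag name is illustrative only)
./lai llava run --prompt "Summarize what a llamafile is in one sentence."
```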
./lai {model-name} serve {relevant model parameters}
Creates a listening service that accepts requests with model parameters
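A minimal sketch of serving and then querying a model, assuming the served llamafile exposes the default llama.cpp HTTP endpoint on localhost:8080 (the port and the `/completion` route are assumptions about the underlying llamafile server, not something local-ai documents):

```sh
# Start a listening service for the model
./lai llava serve

# From another shell, send a request to the assumed default llamafile endpoint
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Say hello from a llamafile.", "n_predict": 32}'
```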