A local command line tool for managing and executing [llamafiles](https://github.com/Mozilla-Ocho/llamafile)
Built after reading through https://justine.lol/oneliners/ and wanting a quick command line tool to bootstrap and run all the available models easily, as well as to get a better grasp of how everything operates.
`./lai test`

Downloads the LLaVA model and runs an arbitrary test prompt.

`./lai list`

Lists all models recognized by local-ai.

`./lai list offline`

Lists all models currently downloaded and available to run or serve.

`./lai {model-name} params`

Explains the parameters relevant to the specified model and how to pass them.

`./lai {model-name} run {relevant} {model} {parameters}`

Runs the model with the parameters available to the specified model.

`./lai {model-name} serve {relevant} {model} {parameters}`

Starts a listening service that accepts model parameter requests.
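Since llamafile embeds a llama.cpp-style HTTP server, the `serve` subcommand can presumably be queried like any local completion endpoint. A minimal client sketch, assuming the default `/completion` route on `localhost:8080` and the `prompt`/`n_predict` JSON fields (the host, port, route, and field names are assumptions, not confirmed by this README):

```python
import json
from urllib import request

# Assumed address of a running `./lai {model-name} serve` instance
# exposing llamafile's llama.cpp-style HTTP server.
SERVER_URL = "http://localhost:8080/completion"

def build_completion_request(prompt: str, n_predict: int = 64) -> request.Request:
    """Build a POST request carrying the prompt as a JSON body."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode("utf-8")
    return request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Send the request and print the completion text, assuming the
    # server replies with a JSON object containing a "content" field.
    req = build_completion_request("Why do lemurs have striped tails?")
    with request.urlopen(req) as resp:
        print(json.load(resp).get("content", ""))
```

The network call is kept under the `__main__` guard so the request builder can be reused or tested without a server running.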