[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

There's a "models" directory inside the directory where you installed the webui. This is where the model files should go, but they also have supporting files (.yaml or .json) with important metadata about the model.

The easiest way to install a model is to let the webui download the model itself:

Screenshot of Oobabooga's WebUI with the model tab open and the model names from HuggingFace entered
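
If you'd rather script the download than click through the UI, the huggingface_hub library can drop the file straight into that models directory. A rough sketch (the repo and filename are just examples, swap in the model you actually want):

```python
# Rough sketch: download a GGUF file into the webui's models directory.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # example repo, use your own
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # example quantization
    local_dir="models",                                # run this from the webui directory
)
print("Saved to", path)
```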

And after it finishes downloading, just load it into memory: click the refresh button, select the model, choose the llama.cpp loader and hit Load (you may need to tick the 'CPU' box, but llama.cpp can do mixed CPU/GPU inference too, if I remember right).

Screenshot of the model page in Oobabooga's WebUI with the model ready to be loaded
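
For what it's worth, that CPU/GPU split is the same knob the llama-cpp-python bindings expose as n_gpu_layers (0 means pure CPU, higher values offload more layers to the GPU). A rough sketch outside the webui, with example paths and numbers:

```python
# Rough sketch with llama-cpp-python, not the webui's own code.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # example path
    n_gpu_layers=20,  # 0 = CPU only; raise it to offload more layers to the GPU
    n_ctx=2048,       # context window
)

out = llm("Q: What does the n_gpu_layers setting do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```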

My install is a few months old, so I hope the UI hasn't changed too drastically in the meantime :)