I managed to get ollama running through Docker easily. It's by far the least painful of the options I tried, and I just make requests to the API it exposes. You can also give it GPU resources through Docker, and there's a CLI tool for a quick chat interface if you want to play with that. I can run Llama 3 (8B) on my 3070 without issues.
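For reference, a minimal sketch of that setup, assuming an NVIDIA card with the NVIDIA Container Toolkit installed and the official `ollama/ollama` image (the container name, volume name, and model tag here are just illustrative):

```
# Start ollama with GPU access; it listens on port 11434 by default
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Quick interactive chat via the bundled CLI
docker exec -it ollama ollama run llama3

# Or hit the HTTP API directly ("stream": false returns a single JSON object)
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```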
Training an LLM is very difficult and expensive, and I don't think it's a good place for anyone to start. Many of the popular models (Llama, GPT, etc.) are astronomically expensive to train and require an ungodly amount of resources.