this post was submitted on 04 Sep 2024
65 points (90.1% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions


Obviously there's not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self-hosted models? Especially stuff like the Linux Foundation's Open Model Initiative?

I feel like a lot of people just don't know there are Apache/CC-BY-NC licensed "AI" they can run on sane desktops, right now, that are incredible. I'm thinking of the most recent Command-R, specifically. I can run it on one GPU, and it blows expensive API models away, and it's mine to use.
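A minimal sketch of what "run it on one GPU" can look like in practice, assuming a quantized Command-R GGUF and the llama-cpp-python bindings; the filename and settings below are placeholders, not the poster's actual setup:

```python
# Hedged example: load a quantized Command-R GGUF on a single GPU
# with llama-cpp-python. The file name is a placeholder; pick whatever
# quant actually fits your card's VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="c4ai-command-r-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,   # offload every layer to the GPU
    n_ctx=8192,        # context window; lower it if VRAM runs out
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Fediverse in two sentences."}],
)
print(out["choices"][0]["message"]["content"])
```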

And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open-source and legally licensed datasets, cheap training... and OpenAI and such want to shut all of this down because it breaks their monopoly, where they can just outspend everyone on scaling, stealing data and destroying the planet. It's actually a threat to them.

Again, I feel like corporate social media vs. the Fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic and a WIP, kills a lot of the downsides.

top 50 comments
[–] [email protected] 3 points 2 months ago

As I said in a different thread:

I might be this close to Butlerian Jihad thought when it comes to AI as an invention.

But if it must come to pass, better it be on the back of community owned and controlled models than a couple of megacorps.

[–] [email protected] 1 points 2 months ago (1 children)

Very much pro open-source AI, especially as a concept of a digital public good, with https://petals.dev/ being the most promising option in that regard (imagine something like RAG over the Arch wiki with very large models supported by the community!).
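For reference, a hedged sketch of the Petals client API (roughly what its README shows): you join a public swarm and generate from a large model whose layers are split across volunteers' GPUs. The model name is just an example of something the public swarm has hosted; swap in whatever is currently served.

```python
# Minimal Petals client sketch: the model's layers live on other people's
# GPUs, and generate() hops across the swarm for every token.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example swarm-hosted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("What is the Fediverse?", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```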

It feels very enthusiast-only right now, where I feel like I'm just on the cusp of having a usable setup.

I personally really want a full dev agent that just takes GitLab issues and runs the code against tests until they pass, then cycles between attempting to explain what it is doing and refactoring until that explanation is reasonably simple, then submits a PR.

At the moment I am trying to use it as a copilot (Ollama with Llama 3, plus the Continue and DevonAI VS Code plugins), all on my MacBook (my Linux machine's GPU was too small, at least the first time I attempted it). That said, it's OK for questions, but no real luck on a decent experience for actually making anything.

The next steps, to me, for it to move from enthusiast to hobbyist would be:

  1. Models that just work on my machine. I had to do a lot of trial and error just to get performant models.
  2. Models for my use case. I don't know which models support tooling or multimodal inputs, or which models are actually optimized for programming, for actions (à la Open Interpreter), for reviewing documents, etc.
  3. For federated setups (like petals.dev), I feel like I need some sane data guardrails. I don't want my medical documents anywhere near anything "BitTorrent style", but I would absolutely love to leverage it for better outcomes on open-source projects without secrets files. This also feeds into point 2 for me.
  4. More sane RAG. Maybe even IPFS links to caches or DBs for popular data sources (like code docs, for example).

I feel like there has to be a better way to do this. Maybe it's just SELinux rules for data tags to lock down my local system, plus some routing config file at the root of my projects. Idk, tbh.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (2 children)

Honestly, I am not sold on Petals; it leaves so many technical innovations behind, and it's just not really taking off like it needs to.

IMO a much cooler project is the AI Horde: a swarm of hosts, but no splitting, and already with a boatload of actual users.

And (no offense) there are much better models to use than Ollama's Llama 3 8B, and which ones completely depends on how much RAM your Mac has. They get better and better the more you have, all the way out to 192GB (where you can squeeze in the very amazing DeepSeek Coder V2).

[–] [email protected] 1 points 2 months ago (1 children)

The splitting is 80% of the cool factor for me: rather than bogging down the one node that can handle the cooler models, you get more contribution opportunities.

I wonder honestly if a petals network could be a target host on horde lol

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

The problem is that splitting models up over a network, even over LAN, is not super efficient. The entire weights need to be run through for every token, i.e. roughly every half word.
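Some back-of-envelope arithmetic (my own assumed numbers, not from this thread) for why that hurts: a dense model has to stream all of its weights from memory for every generated token, and splitting across hosts then adds a network round trip per pipeline hop on top of that, for every token.

```python
# Rough, assumed numbers: tokens/sec is roughly bounded by
# memory bandwidth / model size, plus per-token network latency when split.
model_gb = 40            # e.g. a ~70B model quantized to ~4.5 bits per weight
vram_bw_gbs = 900        # a 3090-class GPU
hops, rtt_s = 8, 0.05    # a Petals-style swarm: 8 hosts, ~50 ms between them

compute_s = model_gb / vram_bw_gbs   # time to stream the weights once
network_s = hops * rtt_s             # extra latency added by splitting

print(f"one big GPU: ~{1 / compute_s:.1f} tok/s")                # ~22 tok/s
print(f"split swarm: ~{1 / (compute_s + network_s):.1f} tok/s")  # ~2 tok/s
```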

And the other problem is that Petals just can't keep up with the crazy dev pace of the LLM community. Honestly, they should dump it and fork or contribute to llama.cpp or exllama, as TBH no one wants to split up Llama 2 (or even Llama 3) 70B and be a generation or two behind, with a base instruct model instead of a finetune.

Even the horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would get you lots of karma.

The diffusion community is very different, as the output is one image and even the largest open models are much smaller. LoRA usage is also standardized there, while it is not in LLM land.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

I guess being able to help serve a 405B model even though I'm on a laptop is just awesome to me.

Also, I saw LoRA was an option for Petals, but I haven't messed with it at all.

[–] [email protected] 1 points 2 months ago (2 children)

None taken! I'll check out AI Horde!

Are there any objectively measured ways, or at least subjective-review-based metrics, for a model on a given problem set? I know the white papers tend to include them, and sometimes the git repos, but I don't see that info when searching through Ollama, for example.

I saw your other post about Ollama alternatives, and the concurrency mention in one of the projects' READMEs sounds promising.

[–] [email protected] 2 points 2 months ago

Oh, and as for benchmarks, check the Hugging Face Open LLM Leaderboard. The new one.

But take it with a LARGE grain of salt. Some models game their scores in different ways.

There are more niche benchmarks floating around, such as RULER for long context performance. Amazon ran a good array of models to test their mistral finetune: https://huggingface.co/aws-prototyping/MegaBeam-Mistral-7B-512k

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

Honestly I would get away from ollama. I don't like it for a number of reasons, including:

Suboptimal quants

Suboptimal settings

Limited model selection (as opposed to just browsing Hugging Face)

Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing cache, double especially if you are not on a Mac

Frankly, a lot of attention squatting/riding off llama.cpp's development without contributing a ton back

Rumblings of a closed-source project

I could go on and on, including some behavior I just didn't like from the devs, but I think I'll stop, as it's really not that bad.

[–] [email protected] 2 points 2 months ago (1 children)

I love the idea; I much prefer it to the mainstream. The problem is, the typical processes for documenting FOSS and self-hosted projects (websites, wikis, mailing lists, etc.) move too slowly and are too cumbersome for how quickly things are developing right now. So people are kind of having to invent the new tech and new ways to communicate about it, and they're not always making choices that either scale or are easy to find and reference.

Okay, since you seem to be so helpful here, I'll lay out where I'm at. I've been using LLMs like ChatGPT, Copilot, and Bard more professionally. I find them equal parts useful, confusing, annoying, and skeevy. I've got a lil VPS I run for services; I could put a front end on there easily. I've also got an old 8-core Xeon machine with like 48GB RAM and a leftover AMD R9 270 sitting there with Unraid barely installed. I can change the OS of course, but what am I realistically looking at being able to run locally that won't go above like 60-75% usage, so I can still eventually get a couple of game servers, network storage, and Jellyfin working? I'll be honest, I don't care about image generation much, but if I do I can always look into upgrading.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

but what am I realistically looking at being able to run locally that won’t go above like 60-75% usage so I can still eventually get a couple game servers, network storage, and Jellyfin working?

Honestly, not much. Llama 8B, but very slowly, or maybe DeepSeek V2 chat, preprocessed on the 270 with Vulkan but mostly running on the CPU. And I guess just limit it to 6 threads? I'd host it with kobold.cpp's Vulkan backend, or maybe the llama.cpp server if there will be multiple users.
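If you do go the llama.cpp server route for multiple users, a hedged sketch of what the client side can look like: the server exposes an OpenAI-compatible HTTP endpoint, so anything on your LAN can hit it. The host, port, and prompt below are placeholders for whatever you actually start the server with.

```python
# Hedged sketch: query a llama.cpp server's OpenAI-compatible endpoint.
# Assumes a server you started yourself; localhost:8080 is just a guess.
import requests

r = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello from the old Xeon box"}],
        "max_tokens": 128,
    },
    timeout=300,  # old hardware is slow, be generous
)
print(r.json()["choices"][0]["message"]["content"])
```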

You can try them to see if they feel OK, but LLMs are just not something that likes old hardware. An RTX 3060 (or a Mac, or a 12GB+ AMD GPU) is considered the bare minimum in the community, and a 3090 or 7900 XTX the standard.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

OK, so the reaction here seems pretty positive.

But when I bring this up in other threads (or even on Reddit, in the few subreddits I still use), the reaction is overwhelmingly negative. Like, I briefly mentioned fixing the video quality issues of an old show in another fandom with diffusion models, and I felt like I was going to get banned and doxxed.

I see it a lot here too, in any thread about OpenAI or whatever.

[–] [email protected] 10 points 2 months ago (1 children)

I do think it's good that we're able to self-host these models. Better than not being able to.

But the biggest draw of open-source to me is that I and others in the community can fix things.
It's possible that I just don't understand enough about how these models are created, but right now, it doesn't feel like we're able to fix things.

If the next Llama model loses all knowledge of the Uyghur genocide because Facebook wants to distribute it in China, then I don't know how we'd patch that back in. Even collecting the training data is tricky.

It feels a lot more like Creative Commons than open-source, i.e. you can use what they've created, and you can remix it, but adding to it is not easily possible.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

I don’t know how we’d patch that back in. Even collecting the training data is tricky.

You can just take encyclopedia articles and news articles, then train it back in. It's easy! It's not expensive either, maybe $100 even if it's a really big model and you are uncensoring a ton of topics.

People uncensor models all the time; it's an avenue of research in the LLM community. And in fact, there are many quite good Chinese models (like Qwen2) that have been "uncensored" by the community.
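To make "train it back in" concrete, here is a hedged sketch of a small LoRA fine-tune over an open text corpus with Hugging Face transformers + peft. The base model and dataset are placeholders (wikitext stands in for whatever encyclopedia/news articles cover the missing topic), not a recipe from this thread.

```python
# Minimal LoRA continued-training sketch: add small trainable adapters to a
# base model and train them on articles about the topic you want back.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Meta-Llama-3-8B"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Placeholder corpus; swap in the articles you actually want the model to relearn.
data = load_dataset("wikitext", "wikitext-103-raw-v1", split="train[:1%]")
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="relearned", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```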

[–] [email protected] 5 points 2 months ago (2 children)

I'm in favor of a "ML-GPL", where models must be made available for free to those whose data was used to train them.

[–] [email protected] 2 points 2 months ago (1 children)

Publishing a dataset is just inviting legal trouble. Look at all the nonsense LAION had to go through for LAION-5B. I'm not surprised people are not publishing datasets more often.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

Practically that just means "open weights" lol. Easier to just do that than track all the sources.

Not that I disagree.

But one sticking point is allowing commercial use, as many companies do like noncommercial licenses so they can still make money off their models.

[–] [email protected] 11 points 2 months ago* (last edited 2 months ago) (1 children)

While I'm really into local hosting and open LLMs, I've largely stepped back due to 'fatigue'. I've downloaded, tweaked, and reshuffled models and programs, then a couple of months pass and it's leapt forward again. Which is good, but I figured I'd wait until it slowed a bit.

I will say the fact that I can run decent 7B and even 10B models and get decent responses and times with a 3070 is impressive. AnythingLLM has been a really handy program for me; still in development, but it's been neat working with RAG. I also moved from textgen to LM Studio and am really liking it. I like textgen, but I felt it got a bit sidetracked. A lot of good suggestions in here, so cheers OP.

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago) (1 children)

You can probably run Nemo 12B pretty quickly, though Llama 3.1 / Gemma 9B finetunes may be better, tbh. DeepSeek Coder V2 Lite with offloading would still be fast, even though it's a 16B, since it's such a heavy MoE.

Hardware is such a limiting factor now. Once quad-channel APUs and such start coming out, I feel like it will open up the space, so people don't have to hunt down used 3090s and build desktops around them.

[–] [email protected] 3 points 2 months ago (2 children)

Last I tried was a Fimbul merge at 10.4B with RoPE scaling for creative writing, which was great, but yeah, 3.1 is where I've landed lately. I'll have to check out Nemo! Like you mentioned, I was sitting on money to grab a 3090, but I think I'll wait for the RTX 50xx to drive down prices, or just for dedicated hardware. I'll be sure to keep an eye on the AI subs though; clearly there's a community for it here that's interested in discussion.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

Oh, and I forgot to mention: instead of a 5090, buy AMD Strix Halo if it's any good.

I cannot emphasize enough how awesome 128GB on a fast APU would be. That opens up (admittedly slow, but usable) inference of "huge" models like Mistral Large, and very fast inference of large MoE models like 8x22B.

[–] [email protected] 2 points 2 months ago

Good tips, thanks!! I’ll definitely check it out.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

rtx50xx

Don't. Nvidia is going to price-gouge the snot out of it. Honestly, if you want to buy new, just get a 7900 XTX. Screw Nvidia's pricing on new cards, lol.

fimbul merge for 10.4b

Speaking as someone who's done a lot of merging, the "upscaling" merges are not great. RoPE-scaling the context isn't either. You are better off finding models that were trained at the parameter count and context length you want in the first place, and there is a lot more choice these days.

[–] [email protected] 2 points 2 months ago (1 children)

Oh, fuck buying Nvidia new. I was going to see if it depressed 40xx prices, or 3090 prices even further, but I'm not sure it would.

Neat didn’t know that about rope, as you can guess largely due to having fuck all memory to work with. Is AMD viable with LLMs now? Honestly if I can make it work with an AMD GPU I just may because I agree screw Nvidia.

[–] [email protected] 3 points 2 months ago (1 children)

For inference? AMD is more finicky to set up, but totally fine once you do. 7900 XTX prices can be very good.

I feel like 3090s have bottomed out, as they are just getting more rare now, and 4090s are so freaking expensive to start with I'm not sure how much they'll come down.

Another feature you might not be aware of, that people use now, is a quantized KV cache. With it, I can run a 19GB 35B model and still fit 131K of context into VRAM, with basically no quality loss.
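Some back-of-envelope math (my assumed architecture numbers, not from this comment) on why quantizing the KV cache matters so much at long context:

```python
# Rough KV-cache sizing for a ~35B GQA model; layer/head counts are assumptions.
layers, kv_heads, head_dim, ctx = 40, 8, 128, 131_072

def kv_cache_gib(bytes_per_elem: float) -> float:
    # K and V tensors per layer, per position
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1024**3

print(f"fp16 cache: {kv_cache_gib(2):.1f} GiB")   # ~20 GiB: won't fit next to 19GB of weights
print(f"q4 cache:   {kv_cache_gib(0.5):.1f} GiB") # ~5 GiB: leaves room next to the weights
```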

[–] [email protected] 1 points 2 months ago (1 children)

How are you people running CUDA kernels?

[–] [email protected] 3 points 2 months ago

ROCm

exllama, llama.cpp, vllm/aphrodite, and (I think) sglang all support it now.

[–] [email protected] 6 points 2 months ago (3 children)

I’m most excited where it’s most open. Clear training process, legal data sets, fully open code bases, published reports, etc. I think we’re going to see the local models boom in sophistication once that’s more common.

Do you know of any good local models that fit that kind of description?

[–] [email protected] 2 points 2 months ago
[–] [email protected] 3 points 2 months ago

I don't know of any super high-quality ones that run well, but the Open Assistant project (now archived) collected responses from voluntary participants (myself included) to build what is now considered a very high-quality dataset of chat conversation pairs, truly open source, and all voluntarily submitted instead of scraped.

The models are reasonable for fine-tuning, but aren't very good compared to newer models from large companies.
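For anyone curious, a hedged sketch of pulling that community-collected data yourself; the oasst1 release is on Hugging Face, and the field names below are from that dataset:

```python
# Load the Open Assistant (oasst1) conversations from Hugging Face.
from datasets import load_dataset

oasst = load_dataset("OpenAssistant/oasst1", split="train")
msg = oasst[0]
print(msg["role"])   # "prompter" or "assistant"
print(msg["text"])   # the message itself, voluntarily contributed
```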

[–] [email protected] 2 points 2 months ago

Cutting edge ones? Unfortunately, rarely. Right now there's a sliding scale between "open and transparent" and "smart and performant" because they're just so darn expensive to train.

I think some of the closest ones to your requirements are Nvidia's research models, excluding Mistral NeMo, which isn't as well documented (as it's really a Mistral model). And you can see that a lot of the open "alternative" efforts like RWKV, OpenLLaMA and such are severely underfunded and undertrained.

The datasets are there, the highly optimized implementations are getting there, the pieces are there, a lot of models have detailed papers and fully open codebases, but the funding to actually do it is just too much to deal with most of the time.

Another factor is that "closed" datasets like whatever Mistral, Facebook, Cohere and such use do seem to have an edge.

[–] [email protected] 0 points 2 months ago (2 children)
[–] [email protected] 1 points 2 months ago

Here to stay all the same.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

This is fair. So much about it is awful, even with more "open" AI.

But my counter argument is it's happening anyway. And would you rather be stuck with Fediverse, or Facebook? Because if everyone keeps opposing all AI, we're gonna be stuck with AI Facebook.

[–] [email protected] 3 points 2 months ago (1 children)

I'll put it this way. When I call a company's customer service and they say "in a few words, tell us your issue", what I do is say BLARHVSYKKUCAHN

And they say "I'm sorry. I didn't understand that. Please state the reason for your call."

And again I say "AJNCTHDTKVFRIDJXRI"

And they say "I'm sorry. I didn't understand that. Please state the reason for your call."

And I say "JCFYHCTJCZUIVDJ"

At this point, they either hang up on me, in which case I go see them in person.

OR

They say "I'm having trouble understanding you. Please wait while I connect you to someone who can help."

The reason I do this is that I want to slow any advancement of any AI service and fill them with garbage data.

And since the 90s, I've never used my real name online. If I'm signing up for something at Walmart, my name is Bob Wallemarte. Close enough to slip by their automated reject systems, but distinct enough that if I start getting spam for Bob Wallemarte, I know Walmart sold my information.

Then when I sign up for something in the future, I use Walmart's local store address as my home address. So when Walmart wants to mail me spam, they mail it to themselves.

[–] [email protected] 5 points 2 months ago

...In that case, shouldn't you be OK with offline models? No data harvesting is a benefit.
