this post was submitted on 24 Jan 2025

Free Open-Source Artificial Intelligence

top 8 comments
[–] [email protected] 0 points 3 weeks ago

200 tokens per second isn't achievable with a 1.5B model even on low- to mid-range GPUs. Unless they're attaching an external GPU, it's not happening on a Raspberry Pi.

This article is disjointed and smells like AI.
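For anyone who wants to sanity-check throughput claims like that, timing generation yourself is straightforward. A minimal sketch, assuming the Hugging Face `transformers` and `torch` packages and the published DeepSeek-R1-Distill-Qwen-1.5B checkpoint; the prompt and token budget are illustrative, not from the article:

```python
# Rough tokens-per-second measurement for the 1.5B distill.
# Assumes `pip install transformers torch`; runs on GPU if available, otherwise CPU.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Explain what a distilled language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.time()
outputs = model.generate(**inputs, max_new_tokens=256)
elapsed = time.time() - start

# Count only the newly generated tokens, not the prompt.
new_tokens = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens} tokens in {elapsed:.1f}s -> {new_tokens / elapsed:.1f} tok/s")
```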

[–] [email protected] 0 points 3 weeks ago (1 children)

I was using their 7B model and it was kinda poop. Gonna try the 14B one next when I get home.

[–] [email protected] 0 points 3 weeks ago

Just tried some of them today and they failed at trivial (for a human junior programmer) code modifications.

[–] [email protected] 0 points 3 weeks ago (1 children)

Yeah, my computer also runs a game at 200 fps. But I'm not saying whether it's Minesweeper or a recent AAA game...

[–] [email protected] 0 points 3 weeks ago (1 children)

Yeah... running the actual DeepSeek R1, a 671B-parameter model, is not quite the same thing as running, say, DeepSeek-R1-Distill-Qwen-1.5B.

[–] [email protected] 0 points 3 weeks ago

A recent i7 running CPU-only can manage the Qwen 1.5B distill at a satisfactory speed, comparable to the big online players. Curious about the recent Intel Core Ultra and Snapdragon chips.
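If anyone wants to try the CPU-only route, here is a minimal sketch using the llama-cpp-python bindings with a quantized GGUF build of the 1.5B distill; the file name, thread count, and prompt are placeholders, not something from this thread:

```python
# CPU-only run of a quantized 1.5B distill via llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name below is a placeholder for whichever quant you download from Hugging Face.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,    # context window
    n_threads=8,   # roughly match your physical core count
)

out = llm(
    "Write a Python function that reverses a linked list.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```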

[–] [email protected] 0 points 3 weeks ago (1 children)

How? I thought you needed huge amounts of VRAM on exorbitantly priced GPUs to run an LLM with decent capability. Are they just running a really small model, or is it hyper-parameterized? Or is the "thinking" process just so effective that it can make up for a weak LLM?

[–] [email protected] 0 points 3 weeks ago

Even though it is the smallest of the distilled models, that model still outperforms GPT-4o and Claude 3.5 Sonnet.

The 7B-parameter models crush the older models on performance benchmarks. The 14-billion-parameter model is very competitive with OpenAI o1-mini on many metrics.

Yeah, sounds like it's their smallest model.