brokenlcd

joined 2 years ago
[–] [email protected] 0 points 6 hours ago* (last edited 6 hours ago)

Did you say you’re using an x1 riser though? That cuts it to a sixteenth of the bandwidth—maybe I’m misunderstanding what you mean by x1.

not exactly. what i mean by an x1 riser is one of these bad boys: they're basically extension cords for an x1 pcie link, no bifurcation involved. the thinkcentre has one x16 slot and two x1 slots. my idea for the whole setup was to put the 3060 i'm getting now into the x16 slot of the motherboard, so it can be used for other tasks as well if needs be, while the second 3060 would go into one of the x1 slots via the riser; from what i managed to read, the x1 link should mostly only affect the time to first load the model. but the fact you only mentioned the x16 slot does make me worry there's some handicap to the other two x1 slots.
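back-of-envelope sketch of why the x1 link should mostly just slow the initial load (nominal PCIe 3.0 link rates; real-world throughput will be lower, and the model size here is a guess):

```python
# rough estimate: time to push model weights over a PCIe 3.0 link
# (nominal link rates, not measured numbers)

PCIE3_LANE_GTPS = 8.0    # GT/s per lane, PCIe 3.0
ENCODING = 128 / 130     # 128b/130b line-encoding overhead

def link_bandwidth_gb_s(lanes: int) -> float:
    """Usable one-way bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * PCIE3_LANE_GTPS * ENCODING / 8  # bits -> bytes

def load_time_s(model_gb: float, lanes: int) -> float:
    """Seconds to transfer the weights across the link, ideal case."""
    return model_gb / link_bandwidth_gb_s(lanes)

# say ~11 GB of weights go to the second 3060 on the riser:
print(f"x16: {load_time_s(11, 16):.1f} s")  # under a second
print(f" x1: {load_time_s(11, 1):.1f} s")   # ~11 s
```

once the weights are resident, generation traffic between cards is tiny by comparison, which is why the x1 link is mostly a load-time tax.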

of course, the second card will come down the line; i don't have nearly enough money for two cards and the thinkcentre :-P.

started with my decade-old ThinkPad inferencing Llama 3.1 8B at about 1 TPS

pretty much the same story, but with the optiplex and the steam deck. come to think of it, i do need to polish and share the scripts i wrote for the steam deck, since i designed them to be used without a dock; they're a wonderful gateway drug to this hobby :-).

there’s a popular way to squeeze performance through Mixture of Experts (MoE) models.

yeah, that's a little too out of scope for me. i'm more practical with the hardware side of things, mostly due to lacking the hardware to really get into the more involved stuff. though it's not out of the question for the future :-).

Tesla P100 16GB

i am somewhat familiar with these bad boys; we have an older poweredge server full of them at work, where they're used for fluid simulation (i'd love to see how it's set up, but i can't risk bricking the workhorse). but the need to figure out a cooling system for these cards, plus the higher power draw, made them not really feasible on my budget unfortunately.

[–] [email protected] 0 points 17 hours ago* (last edited 17 hours ago) (2 children)

Is bifurcation necessary because of how CUDA works, or because of bandwidth constraints? Mostly asking because for the secondary card i'll be limited by the x1 link mining risers have (and also because unfortunately both machines lack that capability :'-) ).

Also, if i offload layers to the GPU manually, so that only the context needs to overflow into RAM, will that be less of a slowdown, or will it be comparable to letting model layers spill into RAM? (Sorry for the question bombing, i'm trying to understand how much i can realistically push the setup before i pull the trigger.)
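purely illustrative sketch of the layer-budgeting math — every constant here (model size, layer count, KV cache and overhead sizes) is a ballpark assumption, not a measurement:

```python
# rough sketch: how many layers of a q4 ~30B model fit in 12 GB of VRAM
# once room is reserved for the KV cache (all sizes are assumptions)

VRAM_GB = 12.0
MODEL_GB = 19.5       # ~30B at q4, ballpark
N_LAYERS = 60         # llama-30B-class models have ~60 transformer layers
KV_CACHE_GB = 1.6     # guess for a few thousand tokens of context
OVERHEAD_GB = 0.8     # driver/scratch buffers, guess

def layers_that_fit() -> int:
    per_layer = MODEL_GB / N_LAYERS
    free = VRAM_GB - KV_CACHE_GB - OVERHEAD_GB
    return min(N_LAYERS, int(free / per_layer))

print(layers_that_fit())  # -> 29, so roughly half the model stays in RAM
```

the point of doing it manually is that you choose what overflows: pinning whole layers on the GPU and letting only the rest spill tends to be more predictable than letting the runtime overcommit.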

[–] [email protected] 0 points 1 day ago (4 children)

You need a $15 electrical relay board that sends power from the motherboard to the second PSU or it won't work.

If you're talking about something like the add2psu boards that jump the PS_ON line of the secondary power supply when the 12v rail of the primary one is ready, then i'm already on it the diy way. Thanks for the heads up though :-).

expect 1-5 tokens per second (really more like 2-3).

5 tokens per second would be wonderful compared to what i'm using right now, since it averages ~1.5 tok/s with 13B models (koboldcpp through vulkan on a steam deck). My main concerns for upgrading are bigger context/models plus trying to speed up prompt processing, but i feel like the last one will also be handicapped by offloading to RAM.

How much vram is the 3060 you're looking at?

I'm looking at the 12GB version. i'm also giving myself space to add another one (most likely through an x1 mining riser) if i manage to save up enough for another card in the future, to bump it up to 24 GB with parallel processing, though i doubt i'll manage.

Sorry for the wall of text, and thanks for the help.

 

I have an unused dell optiplex 7010 i wanted to use as a base for an inference rig.

My idea was to get a 3060, a pcie riser and a 500w power supply just for the gpu. Mechanically speaking, i had the idea of making a backpack of sorts on the side panel to fit both the gpu and the extra power supply, since unfortunately it's an sff machine.

What's making me wary of going through with it is the specs of the 7010 itself: it's a ddr3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system ram it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
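rough back-of-envelope of why that worry is reasonable: generation is mostly memory-bandwidth-bound, so the tok/s ceiling is roughly bandwidth divided by the bytes of weights touched per token. the bandwidth and model-size figures below are nominal assumptions, not benchmarks:

```python
# crude model: tok/s ceiling ~= memory bandwidth / active weight bytes,
# since each generated token reads (roughly) all the weights once

def tok_s_ceiling(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

MODEL_GB = 7.9  # ~13B at q4, ballpark

# ddr3 dual-channel effective bandwidth (~21 GB/s) vs a 3060's gddr6 (~360 GB/s):
print(f"ddr3:  {tok_s_ceiling(21, MODEL_GB):.1f} tok/s max")   # ~2.7
print(f"gddr6: {tok_s_ceiling(360, MODEL_GB):.1f} tok/s max")  # ~45
```

so layers left in ddr3 run an order of magnitude slower than layers on the card, and the slowest portion dominates the overall rate.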

Do you think it's even worth going through?

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago)

I don't have adhd, but femtanyl, kmfdm and justice are wonderful when crunching for exams.

[–] [email protected] 3 points 1 week ago

No no no. You don't get it. The turd is turning into a werewolf mid-shit.

[–] [email protected] 4 points 1 week ago

Tbh, every time i see soulless corporations trying to look more amicable like this, the only thing that comes to mind is "✨some pretty colors aren't going to erase your sins✨" said in the most cutesy voice imaginable.

[–] [email protected] 4 points 1 week ago (1 children)

Don't give me ideas... I love spicy stuff, and it has been a pretty good deterrent in and of itself from having my foodstuffs stolen. So two birds with one stone...

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (2 children)

amphotericity is some weird shit, so yes. Water is also an acid. (100% butchered the translation)

[–] [email protected] 0 points 1 week ago

It's so hot outside even the birb's melting

[–] [email protected] 3 points 1 week ago

Yeah, but they can't get you to scroll through all the ads if they don't water it down to hell.

[–] [email protected] 21 points 1 week ago (4 children)

I remember solving something similar using an opaque bottle with "GI supplements, don't drink" written in sharpie. Especially since the first time it was actually true and they didn't believe the warning.
